=== RUN TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run: out/minikube-linux-amd64 start -p pause-20220701225037-10065 --alsologtostderr -v=1 --driver=docker --container-runtime=docker
=== CONT TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-20220701225037-10065 --alsologtostderr -v=1 --driver=docker --container-runtime=docker: (58.075029745s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got:
-- stdout --
* [pause-20220701225037-10065] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=14483
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
* Using the docker driver based on existing profile
* Starting control plane node pause-20220701225037-10065 in cluster pause-20220701225037-10065
* Pulling base image ...
* Updating the running docker "pause-20220701225037-10065" container ...
* Preparing Kubernetes v1.24.2 on Docker 20.10.17 ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "pause-20220701225037-10065" cluster and "default" namespace by default
-- /stdout --
** stderr **
I0701 22:51:54.837649 199091 out.go:296] Setting OutFile to fd 1 ...
I0701 22:51:54.837762 199091 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0701 22:51:54.837772 199091 out.go:309] Setting ErrFile to fd 2...
I0701 22:51:54.837776 199091 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0701 22:51:54.838170 199091 root.go:329] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/bin
I0701 22:51:54.838388 199091 out.go:303] Setting JSON to false
I0701 22:51:54.840127 199091 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":2067,"bootTime":1656713848,"procs":693,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1012-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0701 22:51:54.840208 199091 start.go:125] virtualization: kvm guest
I0701 22:51:54.843890 199091 out.go:177] * [pause-20220701225037-10065] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
I0701 22:51:54.845208 199091 notify.go:193] Checking for updates...
I0701 22:51:54.845210 199091 out.go:177] - MINIKUBE_LOCATION=14483
I0701 22:51:54.846521 199091 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0701 22:51:54.847793 199091 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig
I0701 22:51:54.849143 199091 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube
I0701 22:51:54.850429 199091 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0701 22:51:54.852095 199091 config.go:178] Loaded profile config "pause-20220701225037-10065": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
I0701 22:51:54.852516 199091 driver.go:360] Setting default libvirt URI to qemu:///system
I0701 22:51:54.919379 199091 docker.go:137] docker version: linux-20.10.17
I0701 22:51:54.919532 199091 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0701 22:51:55.057384 199091 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:71 OomKillDisable:true NGoroutines:68 SystemTime:2022-07-01 22:51:54.95811442 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1012-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662447616 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
I0701 22:51:55.057494 199091 docker.go:254] overlay module found
I0701 22:51:55.059070 199091 out.go:177] * Using the docker driver based on existing profile
I0701 22:51:55.060851 199091 start.go:284] selected driver: docker
I0701 22:51:55.060870 199091 start.go:808] validating driver "docker" against &{Name:pause-20220701225037-10065 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:pause-20220701225037-10065 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
I0701 22:51:55.060998 199091 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0701 22:51:55.061083 199091 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0701 22:51:55.182407 199091 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:71 OomKillDisable:true NGoroutines:68 SystemTime:2022-07-01 22:51:55.096323909 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1012-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662447616 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
I0701 22:51:55.183253 199091 cni.go:95] Creating CNI manager for ""
I0701 22:51:55.183278 199091 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0701 22:51:55.183292 199091 start_flags.go:310] config:
{Name:pause-20220701225037-10065 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:pause-20220701225037-10065 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
I0701 22:51:55.186655 199091 out.go:177] * Starting control plane node pause-20220701225037-10065 in cluster pause-20220701225037-10065
I0701 22:51:55.188049 199091 cache.go:120] Beginning downloading kic base image for docker with docker
I0701 22:51:55.189243 199091 out.go:177] * Pulling base image ...
I0701 22:51:55.190641 199091 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
I0701 22:51:55.190687 199091 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4
I0701 22:51:55.190702 199091 cache.go:57] Caching tarball of preloaded images
I0701 22:51:55.190763 199091 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon
I0701 22:51:55.190943 199091 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0701 22:51:55.190973 199091 cache.go:60] Finished verifying existence of preloaded tar for v1.24.2 on docker
I0701 22:51:55.191097 199091 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/pause-20220701225037-10065/config.json ...
I0701 22:51:55.229856 199091 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon, skipping pull
I0701 22:51:55.229886 199091 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e exists in daemon, skipping load
I0701 22:51:55.229903 199091 cache.go:208] Successfully downloaded all kic artifacts
I0701 22:51:55.229943 199091 start.go:352] acquiring machines lock for pause-20220701225037-10065: {Name:mk02b7c6bb700c3f8d74cc59445b685e3838e9eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0701 22:51:55.230040 199091 start.go:356] acquired machines lock for "pause-20220701225037-10065" in 72.371µs
I0701 22:51:55.230069 199091 start.go:94] Skipping create...Using existing machine configuration
I0701 22:51:55.230077 199091 fix.go:55] fixHost starting:
I0701 22:51:55.230358 199091 cli_runner.go:164] Run: docker container inspect pause-20220701225037-10065 --format={{.State.Status}}
I0701 22:51:55.263675 199091 fix.go:103] recreateIfNeeded on pause-20220701225037-10065: state=Running err=<nil>
W0701 22:51:55.263705 199091 fix.go:129] unexpected machine state, will restart: <nil>
I0701 22:51:55.266113 199091 out.go:177] * Updating the running docker "pause-20220701225037-10065" container ...
I0701 22:51:55.267549 199091 machine.go:88] provisioning docker machine ...
I0701 22:51:55.267574 199091 ubuntu.go:169] provisioning hostname "pause-20220701225037-10065"
I0701 22:51:55.267618 199091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220701225037-10065
I0701 22:51:55.303052 199091 main.go:134] libmachine: Using SSH client type: native
I0701 22:51:55.303245 199091 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7dae00] 0x7dde60 <nil> [] 0s} 127.0.0.1 49302 <nil> <nil>}
I0701 22:51:55.303263 199091 main.go:134] libmachine: About to run SSH command:
sudo hostname pause-20220701225037-10065 && echo "pause-20220701225037-10065" | sudo tee /etc/hostname
I0701 22:51:55.427391 199091 main.go:134] libmachine: SSH cmd err, output: <nil>: pause-20220701225037-10065
I0701 22:51:55.427494 199091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220701225037-10065
I0701 22:51:55.466615 199091 main.go:134] libmachine: Using SSH client type: native
I0701 22:51:55.466792 199091 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7dae00] 0x7dde60 <nil> [] 0s} 127.0.0.1 49302 <nil> <nil>}
I0701 22:51:55.466821 199091 main.go:134] libmachine: About to run SSH command:
if ! grep -xq '.*\spause-20220701225037-10065' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-20220701225037-10065/g' /etc/hosts;
  else
    echo '127.0.1.1 pause-20220701225037-10065' | sudo tee -a /etc/hosts;
  fi
fi
I0701 22:51:55.582773 199091 main.go:134] libmachine: SSH cmd err, output: <nil>:
I0701 22:51:55.582800 199091 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube}
I0701 22:51:55.582822 199091 ubuntu.go:177] setting up certificates
I0701 22:51:55.582833 199091 provision.go:83] configureAuth start
I0701 22:51:55.582874 199091 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-20220701225037-10065
I0701 22:51:55.620493 199091 provision.go:138] copyHostCerts
I0701 22:51:55.620551 199091 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.pem, removing ...
I0701 22:51:55.620574 199091 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.pem
I0701 22:51:55.620637 199091 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.pem (1078 bytes)
I0701 22:51:55.620744 199091 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cert.pem, removing ...
I0701 22:51:55.620761 199091 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cert.pem
I0701 22:51:55.620800 199091 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cert.pem (1123 bytes)
I0701 22:51:55.620861 199091 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/key.pem, removing ...
I0701 22:51:55.620873 199091 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/key.pem
I0701 22:51:55.620926 199091 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/key.pem (1675 bytes)
I0701 22:51:55.621060 199091 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca-key.pem org=jenkins.pause-20220701225037-10065 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube pause-20220701225037-10065]
I0701 22:51:55.766239 199091 provision.go:172] copyRemoteCerts
I0701 22:51:55.766300 199091 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0701 22:51:55.766344 199091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220701225037-10065
I0701 22:51:55.807952 199091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49302 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/pause-20220701225037-10065/id_rsa Username:docker}
I0701 22:51:55.895545 199091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0701 22:51:55.913061 199091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0701 22:51:55.931306 199091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
I0701 22:51:55.950233 199091 provision.go:86] duration metric: configureAuth took 367.390088ms
I0701 22:51:55.950256 199091 ubuntu.go:193] setting minikube options for container-runtime
I0701 22:51:55.950429 199091 config.go:178] Loaded profile config "pause-20220701225037-10065": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
I0701 22:51:55.950478 199091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220701225037-10065
I0701 22:51:55.984439 199091 main.go:134] libmachine: Using SSH client type: native
I0701 22:51:55.984620 199091 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7dae00] 0x7dde60 <nil> [] 0s} 127.0.0.1 49302 <nil> <nil>}
I0701 22:51:55.984646 199091 main.go:134] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0701 22:51:56.143739 199091 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
I0701 22:51:56.143766 199091 ubuntu.go:71] root file system type: overlay
I0701 22:51:56.143974 199091 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0701 22:51:56.144036 199091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220701225037-10065
I0701 22:51:56.183145 199091 main.go:134] libmachine: Using SSH client type: native
I0701 22:51:56.183342 199091 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7dae00] 0x7dde60 <nil> [] 0s} 127.0.0.1 49302 <nil> <nil>}
I0701 22:51:56.183476 199091 main.go:134] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0701 22:51:56.308132 199091 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0701 22:51:56.308231 199091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220701225037-10065
I0701 22:51:56.347479 199091 main.go:134] libmachine: Using SSH client type: native
I0701 22:51:56.347668 199091 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7dae00] 0x7dde60 <nil> [] 0s} 127.0.0.1 49302 <nil> <nil>}
I0701 22:51:56.347694 199091 main.go:134] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0701 22:51:56.467592 199091 main.go:134] libmachine: SSH cmd err, output: <nil>:
I0701 22:51:56.467619 199091 machine.go:91] provisioned docker machine in 1.200054464s
I0701 22:51:56.467632 199091 start.go:306] post-start starting for "pause-20220701225037-10065" (driver="docker")
I0701 22:51:56.467639 199091 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0701 22:51:56.467697 199091 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0701 22:51:56.467741 199091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220701225037-10065
I0701 22:51:56.506801 199091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49302 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/pause-20220701225037-10065/id_rsa Username:docker}
I0701 22:51:56.636237 199091 ssh_runner.go:195] Run: cat /etc/os-release
I0701 22:51:56.639010 199091 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0701 22:51:56.639036 199091 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0701 22:51:56.639050 199091 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0701 22:51:56.639058 199091 info.go:137] Remote host: Ubuntu 20.04.4 LTS
I0701 22:51:56.639068 199091 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/addons for local assets ...
I0701 22:51:56.639123 199091 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files for local assets ...
I0701 22:51:56.639209 199091 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100652.pem -> 100652.pem in /etc/ssl/certs
I0701 22:51:56.639324 199091 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0701 22:51:56.645566 199091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100652.pem --> /etc/ssl/certs/100652.pem (1708 bytes)
I0701 22:51:56.661583 199091 start.go:309] post-start completed in 193.940674ms
I0701 22:51:56.661643 199091 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0701 22:51:56.661685 199091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220701225037-10065
I0701 22:51:56.698549 199091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49302 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/pause-20220701225037-10065/id_rsa Username:docker}
I0701 22:51:56.784359 199091 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0701 22:51:56.789651 199091 fix.go:57] fixHost completed within 1.559571352s
I0701 22:51:56.789672 199091 start.go:81] releasing machines lock for "pause-20220701225037-10065", held for 1.559617969s
I0701 22:51:56.789744 199091 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-20220701225037-10065
I0701 22:51:56.821635 199091 ssh_runner.go:195] Run: systemctl --version
I0701 22:51:56.821676 199091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220701225037-10065
I0701 22:51:56.821718 199091 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
I0701 22:51:56.821779 199091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220701225037-10065
I0701 22:51:56.855845 199091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49302 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/pause-20220701225037-10065/id_rsa Username:docker}
I0701 22:51:56.857194 199091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49302 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/pause-20220701225037-10065/id_rsa Username:docker}
I0701 22:51:56.959266 199091 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0701 22:51:56.969901 199091 cruntime.go:273] skipping containerd shutdown because we are bound to it
I0701 22:51:56.969958 199091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0701 22:51:56.981297 199091 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0701 22:51:56.993079 199091 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0701 22:51:57.110554 199091 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0701 22:51:57.231166 199091 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0701 22:51:57.344681 199091 ssh_runner.go:195] Run: sudo systemctl restart docker
I0701 22:52:19.972389 199091 ssh_runner.go:235] Completed: sudo systemctl restart docker: (22.627666847s)
I0701 22:52:19.972448 199091 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0701 22:52:20.204900 199091 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0701 22:52:20.399370 199091 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
I0701 22:52:20.428735 199091 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0701 22:52:20.428802 199091 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0701 22:52:20.433862 199091 start.go:471] Will wait 60s for crictl version
I0701 22:52:20.433917 199091 ssh_runner.go:195] Run: sudo crictl version
I0701 22:52:20.496660 199091 start.go:480] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 20.10.17
RuntimeApiVersion: 1.41.0
I0701 22:52:20.496733 199091 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0701 22:52:20.625438 199091 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0701 22:52:20.743395 199091 out.go:204] * Preparing Kubernetes v1.24.2 on Docker 20.10.17 ...
I0701 22:52:20.743511 199091 cli_runner.go:164] Run: docker network inspect pause-20220701225037-10065 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0701 22:52:20.786161 199091 ssh_runner.go:195] Run: grep 192.168.67.1 host.minikube.internal$ /etc/hosts
I0701 22:52:20.790736 199091 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
I0701 22:52:20.790805 199091 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0701 22:52:20.843652 199091 docker.go:602] Got preloaded images: -- stdout --
k8s.gcr.io/kube-apiserver:v1.24.2
k8s.gcr.io/kube-scheduler:v1.24.2
k8s.gcr.io/kube-controller-manager:v1.24.2
k8s.gcr.io/kube-proxy:v1.24.2
k8s.gcr.io/etcd:3.5.3-0
k8s.gcr.io/pause:3.7
k8s.gcr.io/coredns/coredns:v1.8.6
k8s.gcr.io/pause:3.6
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0701 22:52:20.843682 199091 docker.go:533] Images already preloaded, skipping extraction
I0701 22:52:20.843737 199091 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0701 22:52:20.913902 199091 docker.go:602] Got preloaded images: -- stdout --
k8s.gcr.io/kube-apiserver:v1.24.2
k8s.gcr.io/kube-controller-manager:v1.24.2
k8s.gcr.io/kube-proxy:v1.24.2
k8s.gcr.io/kube-scheduler:v1.24.2
k8s.gcr.io/etcd:3.5.3-0
k8s.gcr.io/pause:3.7
k8s.gcr.io/coredns/coredns:v1.8.6
k8s.gcr.io/pause:3.6
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0701 22:52:20.913928 199091 cache_images.go:84] Images are preloaded, skipping loading
I0701 22:52:20.913971 199091 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0701 22:52:21.045395 199091 cni.go:95] Creating CNI manager for ""
I0701 22:52:21.045419 199091 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0701 22:52:21.045429 199091 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0701 22:52:21.045449 199091 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.24.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-20220701225037-10065 NodeName:pause-20220701225037-10065 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0701 22:52:21.045605 199091 kubeadm.go:162] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.67.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/cri-dockerd.sock
  name: "pause-20220701225037-10065"
  kubeletExtraArgs:
    node-ip: 192.168.67.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.24.2
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0701 22:52:21.045681 199091 kubeadm.go:961] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.24.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=pause-20220701225037-10065 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
[Install]
config:
{KubernetesVersion:v1.24.2 ClusterName:pause-20220701225037-10065 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0701 22:52:21.045732 199091 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.2
I0701 22:52:21.060057 199091 binaries.go:44] Found k8s binaries, skipping transfer
I0701 22:52:21.060126 199091 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0701 22:52:21.068803 199091 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (488 bytes)
I0701 22:52:21.133316 199091 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0701 22:52:21.166696 199091 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2049 bytes)
I0701 22:52:21.230832 199091 ssh_runner.go:195] Run: grep 192.168.67.2 control-plane.minikube.internal$ /etc/hosts
I0701 22:52:21.235379 199091 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/pause-20220701225037-10065 for IP: 192.168.67.2
I0701 22:52:21.235515 199091 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.key
I0701 22:52:21.235564 199091 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/proxy-client-ca.key
I0701 22:52:21.235670 199091 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/pause-20220701225037-10065/client.key
I0701 22:52:21.235750 199091 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/pause-20220701225037-10065/apiserver.key.c7fa3a9e
I0701 22:52:21.235801 199091 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/pause-20220701225037-10065/proxy-client.key
I0701 22:52:21.235942 199091 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/10065.pem (1338 bytes)
W0701 22:52:21.235993 199091 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/10065_empty.pem, impossibly tiny 0 bytes
I0701 22:52:21.236011 199091 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca-key.pem (1675 bytes)
I0701 22:52:21.236043 199091 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem (1078 bytes)
I0701 22:52:21.236079 199091 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/cert.pem (1123 bytes)
I0701 22:52:21.236111 199091 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/key.pem (1675 bytes)
I0701 22:52:21.236161 199091 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100652.pem (1708 bytes)
I0701 22:52:21.236978 199091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/pause-20220701225037-10065/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0701 22:52:21.256600 199091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/pause-20220701225037-10065/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0701 22:52:21.275827 199091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/pause-20220701225037-10065/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0701 22:52:21.294154 199091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/pause-20220701225037-10065/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0701 22:52:21.313598 199091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0701 22:52:21.334099 199091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0701 22:52:21.356557 199091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0701 22:52:21.376123 199091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0701 22:52:21.395642 199091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100652.pem --> /usr/share/ca-certificates/100652.pem (1708 bytes)
I0701 22:52:21.415205 199091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0701 22:52:21.435442 199091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/10065.pem --> /usr/share/ca-certificates/10065.pem (1338 bytes)
I0701 22:52:21.532823 199091 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0701 22:52:21.547807 199091 ssh_runner.go:195] Run: openssl version
I0701 22:52:21.553275 199091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0701 22:52:21.561835 199091 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0701 22:52:21.565288 199091 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul 1 22:24 /usr/share/ca-certificates/minikubeCA.pem
I0701 22:52:21.565332 199091 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0701 22:52:21.571630 199091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0701 22:52:21.580603 199091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10065.pem && ln -fs /usr/share/ca-certificates/10065.pem /etc/ssl/certs/10065.pem"
I0701 22:52:21.590102 199091 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10065.pem
I0701 22:52:21.593954 199091 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul 1 22:28 /usr/share/ca-certificates/10065.pem
I0701 22:52:21.594004 199091 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10065.pem
I0701 22:52:21.600268 199091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10065.pem /etc/ssl/certs/51391683.0"
I0701 22:52:21.608969 199091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/100652.pem && ln -fs /usr/share/ca-certificates/100652.pem /etc/ssl/certs/100652.pem"
I0701 22:52:21.618325 199091 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/100652.pem
I0701 22:52:21.622422 199091 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul 1 22:28 /usr/share/ca-certificates/100652.pem
I0701 22:52:21.622480 199091 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/100652.pem
I0701 22:52:21.627406 199091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/100652.pem /etc/ssl/certs/3ec20f2e.0"
I0701 22:52:21.634718 199091 kubeadm.go:395] StartCluster: {Name:pause-20220701225037-10065 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:pause-20220701225037-10065 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
I0701 22:52:21.634830 199091 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0701 22:52:21.674237 199091 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0701 22:52:21.681624 199091 kubeadm.go:410] found existing configuration files, will attempt cluster restart
I0701 22:52:21.681650 199091 kubeadm.go:626] restartCluster start
I0701 22:52:21.681695 199091 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0701 22:52:21.688813 199091 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0701 22:52:21.689505 199091 kubeconfig.go:92] found "pause-20220701225037-10065" server: "https://192.168.67.2:8443"
I0701 22:52:21.690143 199091 kapi.go:59] client config for pause-20220701225037-10065: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/pause-20220701225037-10065/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/pause-20220701225037-10065/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x173d480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0701 22:52:21.690795 199091 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0701 22:52:21.698182 199091 api_server.go:165] Checking apiserver status ...
I0701 22:52:21.698225 199091 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0701 22:52:21.706287 199091 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0701 22:52:21.906854 199091 api_server.go:165] Checking apiserver status ...
I0701 22:52:21.906928 199091 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0701 22:52:21.916639 199091 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0701 22:52:22.106884 199091 api_server.go:165] Checking apiserver status ...
I0701 22:52:22.106956 199091 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0701 22:52:22.115595 199091 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0701 22:52:22.306884 199091 api_server.go:165] Checking apiserver status ...
I0701 22:52:22.306961 199091 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0701 22:52:22.319715 199091 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0701 22:52:22.507047 199091 api_server.go:165] Checking apiserver status ...
I0701 22:52:22.507114 199091 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0701 22:52:22.522631 199091 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0701 22:52:22.706944 199091 api_server.go:165] Checking apiserver status ...
I0701 22:52:22.707011 199091 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0701 22:52:22.718928 199091 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0701 22:52:22.907178 199091 api_server.go:165] Checking apiserver status ...
I0701 22:52:22.907259 199091 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0701 22:52:22.916713 199091 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0701 22:52:23.106997 199091 api_server.go:165] Checking apiserver status ...
I0701 22:52:23.107061 199091 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0701 22:52:23.117014 199091 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0701 22:52:23.307393 199091 api_server.go:165] Checking apiserver status ...
I0701 22:52:23.307497 199091 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0701 22:52:23.316259 199091 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0701 22:52:23.506470 199091 api_server.go:165] Checking apiserver status ...
I0701 22:52:23.506544 199091 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0701 22:52:23.515173 199091 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0701 22:52:23.706363 199091 api_server.go:165] Checking apiserver status ...
I0701 22:52:23.706430 199091 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0701 22:52:23.715135 199091 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0701 22:52:23.906368 199091 api_server.go:165] Checking apiserver status ...
I0701 22:52:23.906439 199091 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0701 22:52:23.916053 199091 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0701 22:52:24.107344 199091 api_server.go:165] Checking apiserver status ...
I0701 22:52:24.107403 199091 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0701 22:52:24.116006 199091 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0701 22:52:24.307351 199091 api_server.go:165] Checking apiserver status ...
I0701 22:52:24.307417 199091 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0701 22:52:24.316084 199091 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0701 22:52:24.507335 199091 api_server.go:165] Checking apiserver status ...
I0701 22:52:24.507417 199091 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0701 22:52:24.516495 199091 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0701 22:52:24.706656 199091 api_server.go:165] Checking apiserver status ...
I0701 22:52:24.706736 199091 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0701 22:52:24.715592 199091 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0701 22:52:24.715616 199091 api_server.go:165] Checking apiserver status ...
I0701 22:52:24.715645 199091 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0701 22:52:24.724213 199091 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
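
The run of near-identical "Checking apiserver status" entries above is a poll loop: roughly every 200ms minikube re-runs the pgrep probe and gives up after about three seconds, which is what produces the "timed out waiting for the condition" verdict on the next line. A sketch of the same pattern under those assumed timings, probing locally instead of over SSH:

    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServer re-runs pgrep until a matching process exists or ctx expires.
    func waitForAPIServer(ctx context.Context, interval time.Duration) error {
        ticker := time.NewTicker(interval)
        defer ticker.Stop()
        for {
            // pgrep exits 0 only when a process matches, mirroring the logged command.
            if exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                return nil
            }
            select {
            case <-ctx.Done():
                return fmt.Errorf("timed out waiting for the condition")
            case <-ticker.C:
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
        defer cancel()
        if err := waitForAPIServer(ctx, 200*time.Millisecond); err != nil {
            fmt.Println("apiserver error:", err) // the reconfigure path taken below
        }
    }
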
I0701 22:52:24.724235 199091 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
I0701 22:52:24.724241 199091 kubeadm.go:1092] stopping kube-system containers ...
I0701 22:52:24.724290 199091 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0701 22:52:24.757723 199091 docker.go:434] Stopping containers: [d21e2232a42b 58283dd133ae 1489f2e1da5f fcb211cabc39 6ae728ad062b 14a43d537860 3f4960b499e2 3931159eb84c 9ac586c32c8e bcdc199fee91 cbc14a38b672 c5cdf93ad692 6f599a0df297 12657b5aa4cd ca2542402cbc 536083a3c7c5 0c788e6c2db9 6412100f2fc4 58bd01c22f50 bee4a477ee64 3ae7d9bd5c89 a239cd7931e2 50576a043124]
I0701 22:52:24.757782 199091 ssh_runner.go:195] Run: docker stop d21e2232a42b 58283dd133ae 1489f2e1da5f fcb211cabc39 6ae728ad062b 14a43d537860 3f4960b499e2 3931159eb84c 9ac586c32c8e bcdc199fee91 cbc14a38b672 c5cdf93ad692 6f599a0df297 12657b5aa4cd ca2542402cbc 536083a3c7c5 0c788e6c2db9 6412100f2fc4 58bd01c22f50 bee4a477ee64 3ae7d9bd5c89 a239cd7931e2 50576a043124
I0701 22:52:26.122573 199091 ssh_runner.go:235] Completed: docker stop d21e2232a42b 58283dd133ae 1489f2e1da5f fcb211cabc39 6ae728ad062b 14a43d537860 3f4960b499e2 3931159eb84c 9ac586c32c8e bcdc199fee91 cbc14a38b672 c5cdf93ad692 6f599a0df297 12657b5aa4cd ca2542402cbc 536083a3c7c5 0c788e6c2db9 6412100f2fc4 58bd01c22f50 bee4a477ee64 3ae7d9bd5c89 a239cd7931e2 50576a043124: (1.364751288s)
I0701 22:52:26.122632 199091 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I0701 22:52:26.231420 199091 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0701 22:52:26.240726 199091 kubeadm.go:155] found existing configuration files:
-rw------- 1 root root 5643 Jul 1 22:51 /etc/kubernetes/admin.conf
-rw------- 1 root root 5656 Jul 1 22:51 /etc/kubernetes/controller-manager.conf
-rw------- 1 root root 2043 Jul 1 22:51 /etc/kubernetes/kubelet.conf
-rw------- 1 root root 5600 Jul 1 22:51 /etc/kubernetes/scheduler.conf
I0701 22:52:26.240792 199091 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0701 22:52:26.249023 199091 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0701 22:52:26.257214 199091 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0701 22:52:26.265389 199091 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
stdout:
stderr:
I0701 22:52:26.265446 199091 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0701 22:52:26.273093 199091 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0701 22:52:26.281570 199091 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
stdout:
stderr:
I0701 22:52:26.281619 199091 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
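
The grep/rm pairs above implement a simple reconciliation: each kubeconfig under /etc/kubernetes is searched for the expected control-plane endpoint, and any file that does not mention it is deleted so the "kubeadm init phase kubeconfig" step below regenerates it. A sketch of that pass (files and endpoint taken from the log; error handling trimmed):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        const endpoint = "https://control-plane.minikube.internal:8443"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            // grep exits non-zero when the endpoint is absent from the file.
            if exec.Command("sudo", "grep", endpoint, f).Run() != nil {
                fmt.Println("removing", f, "so kubeadm can regenerate it")
                _ = exec.Command("sudo", "rm", "-f", f).Run()
            }
        }
    }
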
I0701 22:52:26.289436 199091 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0701 22:52:26.297877 199091 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I0701 22:52:26.297898 199091 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0701 22:52:26.345202 199091 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0701 22:52:27.467310 199091 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.12207146s)
I0701 22:52:27.467343 199091 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0701 22:52:27.702053 199091 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0701 22:52:27.771747 199091 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I0701 22:52:27.888423 199091 api_server.go:51] waiting for apiserver process to appear ...
I0701 22:52:27.888480 199091 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0701 22:52:28.399008 199091 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0701 22:52:28.898757 199091 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0701 22:52:29.399284 199091 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0701 22:52:29.420605 199091 api_server.go:71] duration metric: took 1.532180215s to wait for apiserver process to appear ...
I0701 22:52:29.420637 199091 api_server.go:87] waiting for apiserver healthz status ...
I0701 22:52:29.420652 199091 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I0701 22:52:29.420955 199091 api_server.go:256] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
I0701 22:52:29.921511 199091 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I0701 22:52:33.603769 199091 api_server.go:266] https://192.168.67.2:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0701 22:52:33.603807 199091 api_server.go:102] status: https://192.168.67.2:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0701 22:52:33.922213 199091 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I0701 22:52:33.931303 199091 api_server.go:266] https://192.168.67.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0701 22:52:33.931336 199091 api_server.go:102] status: https://192.168.67.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0701 22:52:34.421974 199091 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I0701 22:52:34.430988 199091 api_server.go:266] https://192.168.67.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0701 22:52:34.431021 199091 api_server.go:102] status: https://192.168.67.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0701 22:52:34.921923 199091 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I0701 22:52:35.019944 199091 api_server.go:266] https://192.168.67.2:8443/healthz returned 200:
ok
I0701 22:52:35.029329 199091 api_server.go:140] control plane version: v1.24.2
I0701 22:52:35.029355 199091 api_server.go:130] duration metric: took 5.608711067s to wait for apiserver health ...
I0701 22:52:35.029365 199091 cni.go:95] Creating CNI manager for ""
I0701 22:52:35.029374 199091 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0701 22:52:35.029383 199091 system_pods.go:43] waiting for kube-system pods to appear ...
I0701 22:52:35.235976 199091 system_pods.go:59] 6 kube-system pods found
I0701 22:52:35.236014 199091 system_pods.go:61] "coredns-6d4b75cb6d-9hr6m" [213c07e1-cfd7-4fa5-88d2-f2f672e1b4d4] Running
I0701 22:52:35.236027 199091 system_pods.go:61] "etcd-pause-20220701225037-10065" [66bc4828-ae63-4e73-bb55-23be63fe6bfe] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0701 22:52:35.236036 199091 system_pods.go:61] "kube-apiserver-pause-20220701225037-10065" [f4620885-8ff4-45e8-994f-32d0cdcc6a59] Running
I0701 22:52:35.236050 199091 system_pods.go:61] "kube-controller-manager-pause-20220701225037-10065" [a9b051f4-3ef2-4f1c-9530-1a7c43f8a755] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I0701 22:52:35.236060 199091 system_pods.go:61] "kube-proxy-2rj2j" [4427a6a7-009f-4357-8c8a-fedbba15c52e] Running
I0701 22:52:35.236070 199091 system_pods.go:61] "kube-scheduler-pause-20220701225037-10065" [5d0f25e0-6c06-4b94-9051-dba19aee73a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I0701 22:52:35.236078 199091 system_pods.go:74] duration metric: took 206.689349ms to wait for pod list to return data ...
I0701 22:52:35.236089 199091 node_conditions.go:102] verifying NodePressure condition ...
I0701 22:52:35.324763 199091 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
I0701 22:52:35.324798 199091 node_conditions.go:123] node cpu capacity is 8
I0701 22:52:35.324812 199091 node_conditions.go:105] duration metric: took 88.717509ms to run NodePressure ...
I0701 22:52:35.324836 199091 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0701 22:52:36.623041 199091 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (1.298185003s)
I0701 22:52:36.623078 199091 kubeadm.go:762] waiting for restarted kubelet to initialise ...
I0701 22:52:36.628299 199091 kubeadm.go:777] kubelet initialised
I0701 22:52:36.628326 199091 kubeadm.go:778] duration metric: took 5.235635ms waiting for restarted kubelet to initialise ...
I0701 22:52:36.628334 199091 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0701 22:52:36.634066 199091 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-9hr6m" in "kube-system" namespace to be "Ready" ...
I0701 22:52:38.647755 199091 pod_ready.go:102] pod "coredns-6d4b75cb6d-9hr6m" in "kube-system" namespace has status "Ready":"False"
I0701 22:52:41.146909 199091 pod_ready.go:102] pod "coredns-6d4b75cb6d-9hr6m" in "kube-system" namespace has status "Ready":"False"
I0701 22:52:42.644603 199091 pod_ready.go:92] pod "coredns-6d4b75cb6d-9hr6m" in "kube-system" namespace has status "Ready":"True"
I0701 22:52:42.644631 199091 pod_ready.go:81] duration metric: took 6.010537303s waiting for pod "coredns-6d4b75cb6d-9hr6m" in "kube-system" namespace to be "Ready" ...
I0701 22:52:42.644641 199091 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-20220701225037-10065" in "kube-system" namespace to be "Ready" ...
I0701 22:52:44.653983 199091 pod_ready.go:102] pod "etcd-pause-20220701225037-10065" in "kube-system" namespace has status "Ready":"False"
I0701 22:52:46.655609 199091 pod_ready.go:102] pod "etcd-pause-20220701225037-10065" in "kube-system" namespace has status "Ready":"False"
I0701 22:52:49.154945 199091 pod_ready.go:92] pod "etcd-pause-20220701225037-10065" in "kube-system" namespace has status "Ready":"True"
I0701 22:52:49.154979 199091 pod_ready.go:81] duration metric: took 6.510331143s waiting for pod "etcd-pause-20220701225037-10065" in "kube-system" namespace to be "Ready" ...
I0701 22:52:49.154993 199091 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-20220701225037-10065" in "kube-system" namespace to be "Ready" ...
I0701 22:52:49.158912 199091 pod_ready.go:92] pod "kube-apiserver-pause-20220701225037-10065" in "kube-system" namespace has status "Ready":"True"
I0701 22:52:49.158932 199091 pod_ready.go:81] duration metric: took 3.929952ms waiting for pod "kube-apiserver-pause-20220701225037-10065" in "kube-system" namespace to be "Ready" ...
I0701 22:52:49.158944 199091 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-20220701225037-10065" in "kube-system" namespace to be "Ready" ...
I0701 22:52:49.162788 199091 pod_ready.go:92] pod "kube-controller-manager-pause-20220701225037-10065" in "kube-system" namespace has status "Ready":"True"
I0701 22:52:49.162806 199091 pod_ready.go:81] duration metric: took 3.854918ms waiting for pod "kube-controller-manager-pause-20220701225037-10065" in "kube-system" namespace to be "Ready" ...
I0701 22:52:49.162814 199091 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-2rj2j" in "kube-system" namespace to be "Ready" ...
I0701 22:52:49.166657 199091 pod_ready.go:92] pod "kube-proxy-2rj2j" in "kube-system" namespace has status "Ready":"True"
I0701 22:52:49.166675 199091 pod_ready.go:81] duration metric: took 3.856564ms waiting for pod "kube-proxy-2rj2j" in "kube-system" namespace to be "Ready" ...
I0701 22:52:49.166682 199091 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-20220701225037-10065" in "kube-system" namespace to be "Ready" ...
I0701 22:52:49.170222 199091 pod_ready.go:92] pod "kube-scheduler-pause-20220701225037-10065" in "kube-system" namespace has status "Ready":"True"
I0701 22:52:49.170238 199091 pod_ready.go:81] duration metric: took 3.550181ms waiting for pod "kube-scheduler-pause-20220701225037-10065" in "kube-system" namespace to be "Ready" ...
I0701 22:52:49.170244 199091 pod_ready.go:38] duration metric: took 12.541901866s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
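
Each pod_ready wait above boils down to checking the pod's Ready condition through the API server. A hedged client-go sketch of that check, assuming a placeholder kubeconfig path and the coredns pod name from the log:

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err == nil && isPodReady(pod) {
                return nil
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("pod %s/%s never became Ready", ns, name)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        if err := waitPodReady(cs, "kube-system", "coredns-6d4b75cb6d-9hr6m", 4*time.Minute); err != nil {
            log.Fatal(err)
        }
    }
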
I0701 22:52:49.170257 199091 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0701 22:52:49.177370 199091 ops.go:34] apiserver oom_adj: -16
I0701 22:52:49.177396 199091 kubeadm.go:630] restartCluster took 27.495733284s
I0701 22:52:49.177403 199091 kubeadm.go:397] StartCluster complete in 27.542694024s
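
The "duration metric: took ..." lines come from the usual Go pattern of capturing a start time and logging time.Since on the way out; a minimal sketch:

    package main

    import (
        "log"
        "time"
    )

    func restartCluster() {
        start := time.Now()
        defer func() {
            // Produces lines like "restartCluster took 27.495733284s" above.
            log.Printf("restartCluster took %s", time.Since(start))
        }()
        time.Sleep(50 * time.Millisecond) // stand-in for the real restart work
    }

    func main() { restartCluster() }
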
I0701 22:52:49.177417 199091 settings.go:142] acquiring lock: {Name:mk46f1228f0a7b30ad1ce5ce48145fbdcfa93542 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0701 22:52:49.177504 199091 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig
I0701 22:52:49.178553 199091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig: {Name:mk40c1a74a65307876af762788c72bf321eefc27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0701 22:52:49.179541 199091 kapi.go:59] client config for pause-20220701225037-10065: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/pause-20220701225037-10065/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/pause-20220701225037-10065/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x173d480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0701 22:52:49.181748 199091 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "pause-20220701225037-10065" rescaled to 1
I0701 22:52:49.181800 199091 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}
I0701 22:52:49.184498 199091 out.go:177] * Verifying Kubernetes components...
I0701 22:52:49.181825 199091 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0701 22:52:49.181876 199091 addons.go:412] enableAddons start: toEnable=map[], additional=[]
I0701 22:52:49.182002 199091 config.go:178] Loaded profile config "pause-20220701225037-10065": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
I0701 22:52:49.185869 199091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0701 22:52:49.185995 199091 addons.go:65] Setting storage-provisioner=true in profile "pause-20220701225037-10065"
I0701 22:52:49.186027 199091 addons.go:153] Setting addon storage-provisioner=true in "pause-20220701225037-10065"
W0701 22:52:49.186035 199091 addons.go:162] addon storage-provisioner should already be in state true
I0701 22:52:49.186082 199091 host.go:66] Checking if "pause-20220701225037-10065" exists ...
I0701 22:52:49.186312 199091 addons.go:65] Setting default-storageclass=true in profile "pause-20220701225037-10065"
I0701 22:52:49.186335 199091 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-20220701225037-10065"
I0701 22:52:49.186575 199091 cli_runner.go:164] Run: docker container inspect pause-20220701225037-10065 --format={{.State.Status}}
I0701 22:52:49.186592 199091 cli_runner.go:164] Run: docker container inspect pause-20220701225037-10065 --format={{.State.Status}}
I0701 22:52:49.229553 199091 kapi.go:59] client config for pause-20220701225037-10065: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/pause-20220701225037-10065/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/pause-20220701225037-10065/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x173d480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0701 22:52:49.232766 199091 addons.go:153] Setting addon default-storageclass=true in "pause-20220701225037-10065"
W0701 22:52:49.232797 199091 addons.go:162] addon default-storageclass should already be in state true
I0701 22:52:49.232832 199091 host.go:66] Checking if "pause-20220701225037-10065" exists ...
I0701 22:52:49.233350 199091 cli_runner.go:164] Run: docker container inspect pause-20220701225037-10065 --format={{.State.Status}}
I0701 22:52:49.238093 199091 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0701 22:52:49.239724 199091 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0701 22:52:49.239752 199091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0701 22:52:49.239819 199091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220701225037-10065
I0701 22:52:49.258674 199091 node_ready.go:35] waiting up to 6m0s for node "pause-20220701225037-10065" to be "Ready" ...
I0701 22:52:49.258713 199091 start.go:789] CoreDNS already contains "host.minikube.internal" host record, skipping...
I0701 22:52:49.275934 199091 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
I0701 22:52:49.275959 199091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0701 22:52:49.276022 199091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220701225037-10065
I0701 22:52:49.282775 199091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49302 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/pause-20220701225037-10065/id_rsa Username:docker}
I0701 22:52:49.314420 199091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49302 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/pause-20220701225037-10065/id_rsa Username:docker}
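
The two "new ssh client" lines dial 127.0.0.1:49302 — the host port Docker mapped to the container's 22/tcp (visible in the docker inspect output further down) — as the docker user with the profile's id_rsa. A sketch with golang.org/x/crypto/ssh; the key path is shortened to a placeholder for the long profile path in the log:

    package main

    import (
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/path/to/.minikube/machines/pause-20220701225037-10065/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User: "docker",
            Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
            // The kic container's host key is ephemeral, so it is not pinned here.
            HostKeyCallback: ssh.InsecureIgnoreHostKey(),
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:49302", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        log.Println("connected")
    }
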
I0701 22:52:49.353393 199091 node_ready.go:49] node "pause-20220701225037-10065" has status "Ready":"True"
I0701 22:52:49.353421 199091 node_ready.go:38] duration metric: took 94.715118ms waiting for node "pause-20220701225037-10065" to be "Ready" ...
I0701 22:52:49.353431 199091 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0701 22:52:49.376793 199091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0701 22:52:49.411064 199091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0701 22:52:49.588118 199091 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-9hr6m" in "kube-system" namespace to be "Ready" ...
I0701 22:52:49.952992 199091 pod_ready.go:92] pod "coredns-6d4b75cb6d-9hr6m" in "kube-system" namespace has status "Ready":"True"
I0701 22:52:49.953018 199091 pod_ready.go:81] duration metric: took 364.874291ms waiting for pod "coredns-6d4b75cb6d-9hr6m" in "kube-system" namespace to be "Ready" ...
I0701 22:52:49.953030 199091 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-20220701225037-10065" in "kube-system" namespace to be "Ready" ...
I0701 22:52:50.353081 199091 pod_ready.go:92] pod "etcd-pause-20220701225037-10065" in "kube-system" namespace has status "Ready":"True"
I0701 22:52:50.353117 199091 pod_ready.go:81] duration metric: took 400.078131ms waiting for pod "etcd-pause-20220701225037-10065" in "kube-system" namespace to be "Ready" ...
I0701 22:52:50.353138 199091 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-20220701225037-10065" in "kube-system" namespace to be "Ready" ...
I0701 22:52:50.442105 199091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.065268006s)
I0701 22:52:50.442189 199091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.031095993s)
I0701 22:52:50.443946 199091 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
I0701 22:52:50.445295 199091 addons.go:414] enableAddons completed in 1.263448076s
I0701 22:52:50.752295 199091 pod_ready.go:92] pod "kube-apiserver-pause-20220701225037-10065" in "kube-system" namespace has status "Ready":"True"
I0701 22:52:50.752319 199091 pod_ready.go:81] duration metric: took 399.166858ms waiting for pod "kube-apiserver-pause-20220701225037-10065" in "kube-system" namespace to be "Ready" ...
I0701 22:52:50.752332 199091 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-20220701225037-10065" in "kube-system" namespace to be "Ready" ...
I0701 22:52:51.153325 199091 pod_ready.go:92] pod "kube-controller-manager-pause-20220701225037-10065" in "kube-system" namespace has status "Ready":"True"
I0701 22:52:51.153350 199091 pod_ready.go:81] duration metric: took 401.010379ms waiting for pod "kube-controller-manager-pause-20220701225037-10065" in "kube-system" namespace to be "Ready" ...
I0701 22:52:51.153363 199091 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2rj2j" in "kube-system" namespace to be "Ready" ...
I0701 22:52:51.555026 199091 pod_ready.go:92] pod "kube-proxy-2rj2j" in "kube-system" namespace has status "Ready":"True"
I0701 22:52:51.555054 199091 pod_ready.go:81] duration metric: took 401.682852ms waiting for pod "kube-proxy-2rj2j" in "kube-system" namespace to be "Ready" ...
I0701 22:52:51.555067 199091 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-20220701225037-10065" in "kube-system" namespace to be "Ready" ...
I0701 22:52:51.952530 199091 pod_ready.go:92] pod "kube-scheduler-pause-20220701225037-10065" in "kube-system" namespace has status "Ready":"True"
I0701 22:52:51.952551 199091 pod_ready.go:81] duration metric: took 397.476742ms waiting for pod "kube-scheduler-pause-20220701225037-10065" in "kube-system" namespace to be "Ready" ...
I0701 22:52:51.952568 199091 pod_ready.go:38] duration metric: took 2.599125631s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0701 22:52:51.952588 199091 api_server.go:51] waiting for apiserver process to appear ...
I0701 22:52:51.952624 199091 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0701 22:52:51.962068 199091 api_server.go:71] duration metric: took 2.780244206s to wait for apiserver process to appear ...
I0701 22:52:51.962095 199091 api_server.go:87] waiting for apiserver healthz status ...
I0701 22:52:51.962107 199091 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I0701 22:52:51.966185 199091 api_server.go:266] https://192.168.67.2:8443/healthz returned 200:
ok
I0701 22:52:51.966897 199091 api_server.go:140] control plane version: v1.24.2
I0701 22:52:51.966915 199091 api_server.go:130] duration metric: took 4.814015ms to wait for apiserver health ...
I0701 22:52:51.966922 199091 system_pods.go:43] waiting for kube-system pods to appear ...
I0701 22:52:52.155235 199091 system_pods.go:59] 7 kube-system pods found
I0701 22:52:52.155266 199091 system_pods.go:61] "coredns-6d4b75cb6d-9hr6m" [213c07e1-cfd7-4fa5-88d2-f2f672e1b4d4] Running
I0701 22:52:52.155272 199091 system_pods.go:61] "etcd-pause-20220701225037-10065" [66bc4828-ae63-4e73-bb55-23be63fe6bfe] Running
I0701 22:52:52.155278 199091 system_pods.go:61] "kube-apiserver-pause-20220701225037-10065" [f4620885-8ff4-45e8-994f-32d0cdcc6a59] Running
I0701 22:52:52.155285 199091 system_pods.go:61] "kube-controller-manager-pause-20220701225037-10065" [a9b051f4-3ef2-4f1c-9530-1a7c43f8a755] Running
I0701 22:52:52.155291 199091 system_pods.go:61] "kube-proxy-2rj2j" [4427a6a7-009f-4357-8c8a-fedbba15c52e] Running
I0701 22:52:52.155297 199091 system_pods.go:61] "kube-scheduler-pause-20220701225037-10065" [5d0f25e0-6c06-4b94-9051-dba19aee73a6] Running
I0701 22:52:52.155305 199091 system_pods.go:61] "storage-provisioner" [54985022-a6cd-4c59-af65-805d97e94819] Running
I0701 22:52:52.155312 199091 system_pods.go:74] duration metric: took 188.385275ms to wait for pod list to return data ...
I0701 22:52:52.155326 199091 default_sa.go:34] waiting for default service account to be created ...
I0701 22:52:52.353585 199091 default_sa.go:45] found service account: "default"
I0701 22:52:52.353608 199091 default_sa.go:55] duration metric: took 198.272792ms for default service account to be created ...
I0701 22:52:52.353617 199091 system_pods.go:116] waiting for k8s-apps to be running ...
I0701 22:52:52.554929 199091 system_pods.go:86] 7 kube-system pods found
I0701 22:52:52.554961 199091 system_pods.go:89] "coredns-6d4b75cb6d-9hr6m" [213c07e1-cfd7-4fa5-88d2-f2f672e1b4d4] Running
I0701 22:52:52.554969 199091 system_pods.go:89] "etcd-pause-20220701225037-10065" [66bc4828-ae63-4e73-bb55-23be63fe6bfe] Running
I0701 22:52:52.554975 199091 system_pods.go:89] "kube-apiserver-pause-20220701225037-10065" [f4620885-8ff4-45e8-994f-32d0cdcc6a59] Running
I0701 22:52:52.554981 199091 system_pods.go:89] "kube-controller-manager-pause-20220701225037-10065" [a9b051f4-3ef2-4f1c-9530-1a7c43f8a755] Running
I0701 22:52:52.554986 199091 system_pods.go:89] "kube-proxy-2rj2j" [4427a6a7-009f-4357-8c8a-fedbba15c52e] Running
I0701 22:52:52.554993 199091 system_pods.go:89] "kube-scheduler-pause-20220701225037-10065" [5d0f25e0-6c06-4b94-9051-dba19aee73a6] Running
I0701 22:52:52.555000 199091 system_pods.go:89] "storage-provisioner" [54985022-a6cd-4c59-af65-805d97e94819] Running
I0701 22:52:52.555009 199091 system_pods.go:126] duration metric: took 201.38641ms to wait for k8s-apps to be running ...
I0701 22:52:52.555023 199091 system_svc.go:44] waiting for kubelet service to be running ....
I0701 22:52:52.555071 199091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0701 22:52:52.564406 199091 system_svc.go:56] duration metric: took 9.380785ms WaitForService to wait for kubelet.
I0701 22:52:52.564428 199091 kubeadm.go:572] duration metric: took 3.38260708s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0701 22:52:52.564447 199091 node_conditions.go:102] verifying NodePressure condition ...
I0701 22:52:52.752008 199091 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
I0701 22:52:52.752029 199091 node_conditions.go:123] node cpu capacity is 8
I0701 22:52:52.752039 199091 node_conditions.go:105] duration metric: took 187.588064ms to run NodePressure ...
I0701 22:52:52.752050 199091 start.go:216] waiting for startup goroutines ...
I0701 22:52:52.791381 199091 start.go:506] kubectl: 1.24.2, cluster: 1.24.2 (minor skew: 0)
I0701 22:52:52.793212 199091 out.go:177] * Done! kubectl is now configured to use "pause-20220701225037-10065" cluster and "default" namespace by default
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestPause/serial/SecondStartNoReconfiguration]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect pause-20220701225037-10065
helpers_test.go:235: (dbg) docker inspect pause-20220701225037-10065:
-- stdout --
[
{
"Id": "6fdd3cdb5625721beeb61d0b64ed4631627d88fccb6ebc36823697d95b406333",
"Created": "2022-07-01T22:51:02.300245205Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 179327,
"ExitCode": 0,
"Error": "",
"StartedAt": "2022-07-01T22:51:02.896812642Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:573e7be5768273a7845baee1ae90fa2e33b83b10a7fbb0f0f41efbf29b53d1f1",
"ResolvConfPath": "/var/lib/docker/containers/6fdd3cdb5625721beeb61d0b64ed4631627d88fccb6ebc36823697d95b406333/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/6fdd3cdb5625721beeb61d0b64ed4631627d88fccb6ebc36823697d95b406333/hostname",
"HostsPath": "/var/lib/docker/containers/6fdd3cdb5625721beeb61d0b64ed4631627d88fccb6ebc36823697d95b406333/hosts",
"LogPath": "/var/lib/docker/containers/6fdd3cdb5625721beeb61d0b64ed4631627d88fccb6ebc36823697d95b406333/6fdd3cdb5625721beeb61d0b64ed4631627d88fccb6ebc36823697d95b406333-json.log",
"Name": "/pause-20220701225037-10065",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"pause-20220701225037-10065:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "pause-20220701225037-10065",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 2147483648,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"KernelMemory": 0,
"KernelMemoryTCP": 0,
"MemoryReservation": 0,
"MemorySwap": 4294967296,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/de2850ddba24e6c81e58a5b27b79ff76542340f48f851abbd07613669fc88f10-init/diff:/var/lib/docker/overlay2/5ebfdebe4e44329780b59de0c8b0bf968018a1b5ca93874176282e5d48d8b4db/diff:/var/lib/docker/overlay2/a9c49a4053225d2ebd241f2745fdb18a43efefbab32cf6272e8d78d867eb875b/diff:/var/lib/docker/overlay2/0802fcdede86d6719023f3e2022375a3aa09436b5608d6e9fb20c7b421c40a53/diff:/var/lib/docker/overlay2/fefaf4af75695592d1b8cb09d5ce0ff06c9ec6ca405ab6634301a6372ff90fb0/diff:/var/lib/docker/overlay2/523e8a6a67ba3ae86bf70befa1ddfb565da9d640492c57c39213b01eae3c45bb/diff:/var/lib/docker/overlay2/01825d9999ae652487354dbb98195f290170df60b824e0640efbad3b57057fe5/diff:/var/lib/docker/overlay2/aef6dee284ba27a78360c962618cc5f5921d5df9d4f9cee3e1e05aa7385cae2e/diff:/var/lib/docker/overlay2/d09388e767dcebde123a39eb77d2767b334ffed162b0013c549be8cfafaf32f1/diff:/var/lib/docker/overlay2/a961c54dbc25723780f6af5c7257c9131c92c20cbae5fdb309a448a04177fb0d/diff:/var/lib/docker/overlay2/070954
da53f10d594c7db858ceee957f45dcc290e20fd38e5d2ae3ee6d32a509/diff:/var/lib/docker/overlay2/cf6729cace23a11c96ef50c2039fbe915ea3375a5eea2cc505a97ee37144f10b/diff:/var/lib/docker/overlay2/bb5aa1c8e98214b00a8ca54e8c73310989228e658165d383142a35f522afd361/diff:/var/lib/docker/overlay2/a47fe538fad9a10ad54bda1ed2c2db3d6f7515279f5793af55de9b603f32cc38/diff:/var/lib/docker/overlay2/7aa9fa6b1d74c93745eb01c008d86447d556fffffec606a6517ddd7debc0e0ce/diff:/var/lib/docker/overlay2/105c0e50338102d95508115a573be5ad60e7ce3c240dfa4925d2485bd7550ff1/diff:/var/lib/docker/overlay2/c635bf001d9cfba6946f0a7acd8a209d33c7a4fd24004260b9674c2f4cfe3225/diff:/var/lib/docker/overlay2/5b7b2968c2b74d88b68c69896db41b100a7b4f657c4847b630d3b6385435c736/diff:/var/lib/docker/overlay2/00e793fd0209aee8ea522c9f888a1504bdf3f110a6b59767117491d2f73ded51/diff:/var/lib/docker/overlay2/06582d415f14a950df0d932d005adba6b7bdef9b03e7ec96cd9ee0f3e4f88186/diff:/var/lib/docker/overlay2/d90b5a2b218ac3ce4ee84214f7cc5d9f0cfb4de5cceb562de24197fc3fe97252/diff:/var/lib/d
ocker/overlay2/1d6b6e5d2af72440a4ffe851359e0fcd180b6230c1bbdc6471e1e311550d2af8/diff:/var/lib/docker/overlay2/43098fdc498ae414f4e85d3f2ad689f15233c4149f38411bcdde8c0c6858b45a/diff:/var/lib/docker/overlay2/3dee36596b8505375a1dbe51da977c260f679f20a286b38a4f47fb94bf95483e/diff:/var/lib/docker/overlay2/4365a3944f40a62fd04dc6c3a1f6fc50b645e83950cb5f65afd99ae47b29dcf9/diff:/var/lib/docker/overlay2/10d86d22181d1ff7d3cf42653b6656d6d4e285c1fc95f4a0e3b228c23cf01c2a/diff:/var/lib/docker/overlay2/adba91f6364e8d3eafcc2f1921be64caa35af120fd78598b34158330f1b07c11/diff:/var/lib/docker/overlay2/b11dac8829c82d605c4c9aa2e461e88f5c53fe9ea03f0346a29a84006b96572f/diff:/var/lib/docker/overlay2/a8542b5e868fc08d56cebacdbc3ac16bef43ba9dbb70582466e031f13e2e369c/diff:/var/lib/docker/overlay2/5a7d32bcfb9e1f040b36571d7c2cb9c85eeba09cbc900808cb340a0690d76b53/diff:/var/lib/docker/overlay2/39b83f88bb66f5b127c544d4e4c52cb02acef43dc7d39d5c1739766c7a412049/diff:/var/lib/docker/overlay2/aa7e1d59944cb05594b182c96ad9e4e96d2caf7b22b208ace35452f0017
0f188/diff:/var/lib/docker/overlay2/9428da6997644cd26c066788b084b9abf00b4fbcab734b62b5e135ce3c26e6c6/diff:/var/lib/docker/overlay2/8e5398d669dc8937e39f7dd4dd9fe88f23d8d0408bb7e88f2fcf26f277e57ed7/diff:/var/lib/docker/overlay2/b1ca9bb6fe129d779d179c77dc675fe715e3efe2089cd22455f23076ea6d09e5/diff:/var/lib/docker/overlay2/f8dcad825e8399dc23061b3c8e0ed4867960cdfc9c50a08f2151920b070b150e/diff:/var/lib/docker/overlay2/4b5dcd090442aa9f2a952de485202e6da12be1f754edcc4bb1e179d651d71fc6/diff:/var/lib/docker/overlay2/23101e237652ba79b16635a2274893cd7e3ddf64fed56ef622669a79653e325b/diff:/var/lib/docker/overlay2/0c0e5d0c6ae6c618678469f0a52205dd4f46a14aded01fdcea8aa29f7a5ef810/diff:/var/lib/docker/overlay2/fdc530c0025cfd7b5d7995c60e81f48e9e8b53dacc5ce33a06c63ea380ab7364/diff:/var/lib/docker/overlay2/b88e0fc2e685a4af24fb7b1bd918a66cf2b17d9e94befd1a58d79580164b5002/diff:/var/lib/docker/overlay2/e7d090aef23d3aafdc818f796a577e07c009fae5593337bee3b45a27008c9b8f/diff",
"MergedDir": "/var/lib/docker/overlay2/de2850ddba24e6c81e58a5b27b79ff76542340f48f851abbd07613669fc88f10/merged",
"UpperDir": "/var/lib/docker/overlay2/de2850ddba24e6c81e58a5b27b79ff76542340f48f851abbd07613669fc88f10/diff",
"WorkDir": "/var/lib/docker/overlay2/de2850ddba24e6c81e58a5b27b79ff76542340f48f851abbd07613669fc88f10/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "pause-20220701225037-10065",
"Source": "/var/lib/docker/volumes/pause-20220701225037-10065/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "pause-20220701225037-10065",
"Domainname": "",
"User": "root",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "pause-20220701225037-10065",
"name.minikube.sigs.k8s.io": "pause-20220701225037-10065",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "72e452ed1271ed755b19576efdfb23a166e0e8112cb1fa2665eea6db99922b76",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49302"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49301"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49296"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49300"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49298"
}
]
},
"SandboxKey": "/var/run/docker/netns/72e452ed1271",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"pause-20220701225037-10065": {
"IPAMConfig": {
"IPv4Address": "192.168.67.2"
},
"Links": null,
"Aliases": [
"6fdd3cdb5625",
"pause-20220701225037-10065"
],
"NetworkID": "3b47d047354072632c767ea8f5e73418621d396187310dd665246be007cd885d",
"EndpointID": "2035834098a4d2a78b597653e3d023565d83a47ebdad2e54bd19a457098a3cfe",
"Gateway": "192.168.67.1",
"IPAddress": "192.168.67.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:c0:a8:43:02",
"DriverOpts": null
}
}
}
}
]
-- /stdout --
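
docker inspect always emits a JSON array with one object per container, so the post-mortem dump above can be consumed programmatically. A sketch that pulls out State.Status and the mapped SSH port (container name from the log; uses the docker CLI rather than the Docker SDK):

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "os/exec"
    )

    type containerInfo struct {
        State struct {
            Status  string `json:"Status"`
            Running bool   `json:"Running"`
        } `json:"State"`
        NetworkSettings struct {
            Ports map[string][]struct {
                HostIP   string `json:"HostIp"`
                HostPort string `json:"HostPort"`
            } `json:"Ports"`
        } `json:"NetworkSettings"`
    }

    func main() {
        out, err := exec.Command("docker", "inspect", "pause-20220701225037-10065").Output()
        if err != nil {
            log.Fatal(err)
        }
        var infos []containerInfo // docker inspect wraps results in a JSON array
        if err := json.Unmarshal(out, &infos); err != nil {
            log.Fatal(err)
        }
        for _, c := range infos {
            fmt.Println("status:", c.State.Status) // "running" in the dump above
            if ssh := c.NetworkSettings.Ports["22/tcp"]; len(ssh) > 0 {
                fmt.Printf("ssh mapped to %s:%s\n", ssh[0].HostIP, ssh[0].HostPort)
            }
        }
    }
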
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p pause-20220701225037-10065 -n pause-20220701225037-10065
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p pause-20220701225037-10065 logs -n 25
=== CONT TestPause/serial/SecondStartNoReconfiguration
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-20220701225037-10065 logs -n 25: (1.917583882s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs:
-- stdout --
*
* ==> Audit <==
* |---------|-------------------------------------------|----------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|-------------------------------------------|----------|---------|---------|---------------------|---------------------|
| delete | -p | minikube | jenkins | v1.26.0 | 01 Jul 22 22:50 UTC | 01 Jul 22 22:50 UTC |
| | skaffold-20220701224920-10065 | | | | | |
| start | -p | minikube | jenkins | v1.26.0 | 01 Jul 22 22:50 UTC | |
| | insufficient-storage-20220701225024-10065 | | | | | |
| | --memory=2048 --output=json | | | | | |
| | --wait=true --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| delete | -p | minikube | jenkins | v1.26.0 | 01 Jul 22 22:50 UTC | 01 Jul 22 22:50 UTC |
| | insufficient-storage-20220701225024-10065 | | | | | |
| start | -p pause-20220701225037-10065 | minikube | jenkins | v1.26.0 | 01 Jul 22 22:50 UTC | 01 Jul 22 22:51 UTC |
| | --memory=2048 | | | | | |
| | --install-addons=false | | | | | |
| | --wait=all --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| start | -p | minikube | jenkins | v1.26.0 | 01 Jul 22 22:50 UTC | |
| | NoKubernetes-20220701225037-10065 | | | | | |
| | --no-kubernetes | | | | | |
| | --kubernetes-version=1.20 | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| start | -p | minikube | jenkins | v1.26.0 | 01 Jul 22 22:50 UTC | 01 Jul 22 22:51 UTC |
| | offline-docker-20220701225037-10065 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| | --memory=2048 --wait=true | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| start | -p | minikube | jenkins | v1.26.0 | 01 Jul 22 22:50 UTC | 01 Jul 22 22:51 UTC |
| | NoKubernetes-20220701225037-10065 | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| start | -p | minikube | jenkins | v1.26.0 | 01 Jul 22 22:51 UTC | 01 Jul 22 22:51 UTC |
| | NoKubernetes-20220701225037-10065 | | | | | |
| | --no-kubernetes --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| delete | -p | minikube | jenkins | v1.26.0 | 01 Jul 22 22:51 UTC | 01 Jul 22 22:51 UTC |
| | NoKubernetes-20220701225037-10065 | | | | | |
| start | -p | minikube | jenkins | v1.26.0 | 01 Jul 22 22:51 UTC | 01 Jul 22 22:52 UTC |
| | stopped-upgrade-20220701225037-10065 | | | | | |
| | --memory=2200 --alsologtostderr | | | | | |
| | -v=1 --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| start | -p | minikube | jenkins | v1.26.0 | 01 Jul 22 22:51 UTC | 01 Jul 22 22:51 UTC |
| | NoKubernetes-20220701225037-10065 | | | | | |
| | --no-kubernetes --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| ssh | -p | minikube | jenkins | v1.26.0 | 01 Jul 22 22:51 UTC | |
| | NoKubernetes-20220701225037-10065 | | | | | |
| | sudo systemctl is-active --quiet | | | | | |
| | service kubelet | | | | | |
| profile | list | minikube | jenkins | v1.26.0 | 01 Jul 22 22:51 UTC | 01 Jul 22 22:51 UTC |
| delete | -p | minikube | jenkins | v1.26.0 | 01 Jul 22 22:51 UTC | 01 Jul 22 22:51 UTC |
| | offline-docker-20220701225037-10065 | | | | | |
| start | -p pause-20220701225037-10065 | minikube | jenkins | v1.26.0 | 01 Jul 22 22:51 UTC | 01 Jul 22 22:52 UTC |
| | --alsologtostderr | | | | | |
| | -v=1 --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| profile | list --output=json | minikube | jenkins | v1.26.0 | 01 Jul 22 22:51 UTC | 01 Jul 22 22:51 UTC |
| stop | -p | minikube | jenkins | v1.26.0 | 01 Jul 22 22:51 UTC | 01 Jul 22 22:51 UTC |
| | NoKubernetes-20220701225037-10065 | | | | | |
| start | -p | minikube | jenkins | v1.26.0 | 01 Jul 22 22:51 UTC | 01 Jul 22 22:52 UTC |
| | NoKubernetes-20220701225037-10065 | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| ssh | -p | minikube | jenkins | v1.26.0 | 01 Jul 22 22:52 UTC | |
| | NoKubernetes-20220701225037-10065 | | | | | |
| | sudo systemctl is-active --quiet | | | | | |
| | service kubelet | | | | | |
| delete | -p | minikube | jenkins | v1.26.0 | 01 Jul 22 22:52 UTC | 01 Jul 22 22:52 UTC |
| | NoKubernetes-20220701225037-10065 | | | | | |
| start | -p | minikube | jenkins | v1.26.0 | 01 Jul 22 22:52 UTC | |
| | kubernetes-upgrade-20220701225208-10065 | | | | | |
| | --memory=2200 | | | | | |
| | --kubernetes-version=v1.16.0 | | | | | |
| | --alsologtostderr -v=1 --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| delete | -p | minikube | jenkins | v1.26.0 | 01 Jul 22 22:52 UTC | 01 Jul 22 22:52 UTC |
| | stopped-upgrade-20220701225037-10065 | | | | | |
| start | -p | minikube | jenkins | v1.26.0 | 01 Jul 22 22:52 UTC | 01 Jul 22 22:52 UTC |
| | force-systemd-flag-20220701225213-10065 | | | | | |
| | --memory=2048 --force-systemd | | | | | |
| | --alsologtostderr -v=5 --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| ssh | force-systemd-flag-20220701225213-10065 | minikube | jenkins | v1.26.0 | 01 Jul 22 22:52 UTC | 01 Jul 22 22:52 UTC |
| | ssh docker info --format | | | | | |
| | {{.CgroupDriver}} | | | | | |
| delete | -p | minikube | jenkins | v1.26.0 | 01 Jul 22 22:52 UTC | |
| | force-systemd-flag-20220701225213-10065 | | | | | |
|---------|-------------------------------------------|----------|---------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2022/07/01 22:52:13
Running on machine: ubuntu-20-agent-3
Binary: Built with gc go1.18.3 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0701 22:52:13.624063 208328 out.go:296] Setting OutFile to fd 1 ...
I0701 22:52:13.624293 208328 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0701 22:52:13.624337 208328 out.go:309] Setting ErrFile to fd 2...
I0701 22:52:13.624355 208328 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0701 22:52:13.624992 208328 root.go:329] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/bin
I0701 22:52:13.625381 208328 out.go:303] Setting JSON to false
I0701 22:52:13.627761 208328 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":2086,"bootTime":1656713848,"procs":1077,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1012-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0701 22:52:13.627853 208328 start.go:125] virtualization: kvm guest
I0701 22:52:13.630623 208328 out.go:177] * [force-systemd-flag-20220701225213-10065] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
I0701 22:52:13.632221 208328 out.go:177] - MINIKUBE_LOCATION=14483
I0701 22:52:13.632171 208328 notify.go:193] Checking for updates...
I0701 22:52:13.634901 208328 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0701 22:52:13.636457 208328 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig
I0701 22:52:13.638025 208328 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube
I0701 22:52:13.639542 208328 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0701 22:52:13.641423 208328 config.go:178] Loaded profile config "kubernetes-upgrade-20220701225208-10065": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
I0701 22:52:13.641540 208328 config.go:178] Loaded profile config "missing-upgrade-20220701225156-10065": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
I0701 22:52:13.641666 208328 config.go:178] Loaded profile config "pause-20220701225037-10065": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
I0701 22:52:13.641724 208328 driver.go:360] Setting default libvirt URI to qemu:///system
I0701 22:52:13.685784 208328 docker.go:137] docker version: linux-20.10.17
I0701 22:52:13.685887 208328 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0701 22:52:13.806880 208328 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:79 OomKillDisable:true NGoroutines:85 SystemTime:2022-07-01 22:52:13.721514159 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1012-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662447616 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
I0701 22:52:13.807068 208328 docker.go:254] overlay module found
I0701 22:52:13.809716 208328 out.go:177] * Using the docker driver based on user configuration
I0701 22:52:13.811062 208328 start.go:284] selected driver: docker
I0701 22:52:13.811075 208328 start.go:808] validating driver "docker" against <nil>
I0701 22:52:13.811091 208328 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0701 22:52:13.811885 208328 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0701 22:52:13.931485 208328 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:79 OomKillDisable:true NGoroutines:85 SystemTime:2022-07-01 22:52:13.845334012 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1012-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662447616 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
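The info dump above reports CgroupDriver:cgroupfs, the same field the force-systemd-flag test later checks over SSH with docker info --format {{.CgroupDriver}} (see the Audit table). A hedged Go sketch of that probe, assuming only the docker CLI on PATH:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Mirrors the cgroup-driver check the test performs inside the node.
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		panic(err)
	}
	fmt.Printf("cgroup driver: %s", out) // "cgroupfs" here; "systemd" once forced
}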
I0701 22:52:13.931658 208328 start_flags.go:296] no existing cluster config was found, will generate one from the flags
I0701 22:52:13.931881 208328 start_flags.go:835] Wait components to verify : map[apiserver:true system_pods:true]
I0701 22:52:13.934097 208328 out.go:177] * Using Docker driver with root privileges
I0701 22:52:13.935534 208328 cni.go:95] Creating CNI manager for ""
I0701 22:52:13.935559 208328 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0701 22:52:13.935572 208328 start_flags.go:310] config:
{Name:force-systemd-flag-20220701225213-10065 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:force-systemd-flag-20220701225213-10065 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
I0701 22:52:13.937361 208328 out.go:177] * Starting control plane node force-systemd-flag-20220701225213-10065 in cluster force-systemd-flag-20220701225213-10065
I0701 22:52:13.938730 208328 cache.go:120] Beginning downloading kic base image for docker with docker
I0701 22:52:13.940301 208328 out.go:177] * Pulling base image ...
I0701 22:52:13.941639 208328 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
I0701 22:52:13.941688 208328 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4
I0701 22:52:13.941702 208328 cache.go:57] Caching tarball of preloaded images
I0701 22:52:13.941733 208328 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon
I0701 22:52:13.941976 208328 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0701 22:52:13.941998 208328 cache.go:60] Finished verifying existence of preloaded tar for v1.24.2 on docker
I0701 22:52:13.942121 208328 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/force-systemd-flag-20220701225213-10065/config.json ...
I0701 22:52:13.942153 208328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/force-systemd-flag-20220701225213-10065/config.json: {Name:mk361980e4ceee9ecce5ed4150e5e0664c384acd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0701 22:52:13.978375 208328 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon, skipping pull
I0701 22:52:13.978408 208328 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e exists in daemon, skipping load
I0701 22:52:13.978430 208328 cache.go:208] Successfully downloaded all kic artifacts
I0701 22:52:13.978479 208328 start.go:352] acquiring machines lock for force-systemd-flag-20220701225213-10065: {Name:mk1b80834debd083d4675d3d746669d861ebc275 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0701 22:52:13.978615 208328 start.go:356] acquired machines lock for "force-systemd-flag-20220701225213-10065" in 111.464µs
I0701 22:52:13.978647 208328 start.go:91] Provisioning new machine with config: &{Name:force-systemd-flag-20220701225213-10065 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:force-systemd-flag-20220701225213-10065 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:} &{Name: IP: Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}
I0701 22:52:13.978749 208328 start.go:131] createHost starting for "" (driver="docker")
I0701 22:52:13.980995 208328 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
I0701 22:52:13.981248 208328 start.go:165] libmachine.API.Create for "force-systemd-flag-20220701225213-10065" (driver="docker")
I0701 22:52:13.981282 208328 client.go:168] LocalClient.Create starting
I0701 22:52:13.981371 208328 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem
I0701 22:52:13.981417 208328 main.go:134] libmachine: Decoding PEM data...
I0701 22:52:13.981445 208328 main.go:134] libmachine: Parsing certificate...
I0701 22:52:13.981511 208328 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/cert.pem
I0701 22:52:13.981538 208328 main.go:134] libmachine: Decoding PEM data...
I0701 22:52:13.981559 208328 main.go:134] libmachine: Parsing certificate...
I0701 22:52:13.981987 208328 cli_runner.go:164] Run: docker network inspect force-systemd-flag-20220701225213-10065 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0701 22:52:14.016654 208328 cli_runner.go:211] docker network inspect force-systemd-flag-20220701225213-10065 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0701 22:52:14.016719 208328 network_create.go:272] running [docker network inspect force-systemd-flag-20220701225213-10065] to gather additional debugging logs...
I0701 22:52:14.016741 208328 cli_runner.go:164] Run: docker network inspect force-systemd-flag-20220701225213-10065
W0701 22:52:14.053953 208328 cli_runner.go:211] docker network inspect force-systemd-flag-20220701225213-10065 returned with exit code 1
I0701 22:52:14.053988 208328 network_create.go:275] error running [docker network inspect force-systemd-flag-20220701225213-10065]: docker network inspect force-systemd-flag-20220701225213-10065: exit status 1
stdout:
[]
stderr:
Error: No such network: force-systemd-flag-20220701225213-10065
I0701 22:52:14.054003 208328 network_create.go:277] output of [docker network inspect force-systemd-flag-20220701225213-10065]: -- stdout --
[]
-- /stdout --
** stderr **
Error: No such network: force-systemd-flag-20220701225213-10065
** /stderr **
I0701 22:52:14.054061 208328 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0701 22:52:14.090359 208328 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-90a9139d3589 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:8b:77:20:a6}}
I0701 22:52:14.091345 208328 network.go:240] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-e1f59258eef7 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:bc:97:4d:51}}
I0701 22:52:14.092194 208328 network.go:240] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName:br-3b47d0473540 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:73:30:37:35}}
I0701 22:52:14.092962 208328 network.go:240] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName:br-ac5de5950252 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:51:aa:9f:68}}
I0701 22:52:14.093871 208328 network.go:288] reserving subnet 192.168.85.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.85.0:0xc00044e488] misses:0}
I0701 22:52:14.093916 208328 network.go:235] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0701 22:52:14.093933 208328 network_create.go:115] attempt to create docker network force-systemd-flag-20220701225213-10065 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
I0701 22:52:14.094023 208328 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-20220701225213-10065 force-systemd-flag-20220701225213-10065
I0701 22:52:14.573417 208328 network_create.go:99] docker network force-systemd-flag-20220701225213-10065 192.168.85.0/24 created
I0701 22:52:14.573454 208328 kic.go:106] calculated static IP "192.168.85.2" for the "force-systemd-flag-20220701225213-10065" container
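The four "skipping subnet ... that is taken" lines above show the scan pattern: candidate 192.168.x.0/24 blocks are tried in steps of 9 (49, 58, 67, 76, 85, ...) and the first block with no matching host interface is reserved. A self-contained Go sketch of such a scan; this is illustrative only, not minikube's actual implementation:

package main

import (
	"fmt"
	"net"
)

// taken reports whether any local interface already has an address inside
// the candidate block, as br-90a9139d3589 does for 192.168.49.0/24 above.
func taken(block *net.IPNet) bool {
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return true // be conservative on error
	}
	for _, a := range addrs {
		if ip, _, err := net.ParseCIDR(a.String()); err == nil && block.Contains(ip) {
			return true
		}
	}
	return false
}

func main() {
	for third := 49; third <= 247; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		_, block, _ := net.ParseCIDR(cidr)
		if !taken(block) {
			fmt.Println("free subnet:", cidr) // 192.168.85.0/24 in the run above
			return
		}
	}
}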
I0701 22:52:14.573525 208328 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0701 22:52:14.611029 208328 cli_runner.go:164] Run: docker volume create force-systemd-flag-20220701225213-10065 --label name.minikube.sigs.k8s.io=force-systemd-flag-20220701225213-10065 --label created_by.minikube.sigs.k8s.io=true
I0701 22:52:14.756608 208328 oci.go:103] Successfully created a docker volume force-systemd-flag-20220701225213-10065
I0701 22:52:14.756708 208328 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-20220701225213-10065-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-20220701225213-10065 --entrypoint /usr/bin/test -v force-systemd-flag-20220701225213-10065:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e -d /var/lib
I0701 22:52:19.972389 199091 ssh_runner.go:235] Completed: sudo systemctl restart docker: (22.627666847s)
I0701 22:52:19.972448 199091 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0701 22:52:20.204900 199091 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0701 22:52:20.399370 199091 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
I0701 22:52:20.428735 199091 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0701 22:52:20.428802 199091 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0701 22:52:20.433862 199091 start.go:471] Will wait 60s for crictl version
I0701 22:52:20.433917 199091 ssh_runner.go:195] Run: sudo crictl version
I0701 22:52:20.496660 199091 start.go:480] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 20.10.17
RuntimeApiVersion: 1.41.0
I0701 22:52:20.496733 199091 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0701 22:52:20.625438 199091 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0701 22:52:19.036237 206763 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-20220701225208-10065:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e -I lz4 -xf /preloaded.tar -C /extractDir: (6.811992954s)
I0701 22:52:19.036267 206763 kic.go:188] duration metric: took 6.812154 seconds to extract preloaded images to volume
W0701 22:52:19.036387 206763 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0701 22:52:19.036499 206763 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0701 22:52:19.233728 206763 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-20220701225208-10065 --name kubernetes-upgrade-20220701225208-10065 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220701225208-10065 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-20220701225208-10065 --network kubernetes-upgrade-20220701225208-10065 --ip 192.168.76.2 --volume kubernetes-upgrade-20220701225208-10065:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e
I0701 22:52:19.825829 206763 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220701225208-10065 --format={{.State.Running}}
I0701 22:52:19.880165 206763 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220701225208-10065 --format={{.State.Status}}
I0701 22:52:19.923366 206763 cli_runner.go:164] Run: docker exec kubernetes-upgrade-20220701225208-10065 stat /var/lib/dpkg/alternatives/iptables
I0701 22:52:20.065379 206763 oci.go:144] the created container "kubernetes-upgrade-20220701225208-10065" has a running status.
I0701 22:52:20.065413 206763 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/kubernetes-upgrade-20220701225208-10065/id_rsa...
I0701 22:52:20.312293 206763 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/kubernetes-upgrade-20220701225208-10065/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0701 22:52:20.430586 206763 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220701225208-10065 --format={{.State.Status}}
I0701 22:52:20.484772 206763 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0701 22:52:20.484802 206763 kic_runner.go:114] Args: [docker exec --privileged kubernetes-upgrade-20220701225208-10065 chown docker:docker /home/docker/.ssh/authorized_keys]
I0701 22:52:20.643450 206763 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220701225208-10065 --format={{.State.Status}}
I0701 22:52:20.686919 206763 machine.go:88] provisioning docker machine ...
I0701 22:52:20.686961 206763 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-20220701225208-10065"
I0701 22:52:20.687018 206763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220701225208-10065
I0701 22:52:20.738807 206763 main.go:134] libmachine: Using SSH client type: native
I0701 22:52:20.739056 206763 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7dae00] 0x7dde60 <nil> [] 0s} 127.0.0.1 49331 <nil> <nil>}
I0701 22:52:20.739083 206763 main.go:134] libmachine: About to run SSH command:
sudo hostname kubernetes-upgrade-20220701225208-10065 && echo "kubernetes-upgrade-20220701225208-10065" | sudo tee /etc/hostname
I0701 22:52:21.061481 206763 main.go:134] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-20220701225208-10065
I0701 22:52:21.061549 206763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220701225208-10065
I0701 22:52:21.097926 206763 main.go:134] libmachine: Using SSH client type: native
I0701 22:52:21.098098 206763 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7dae00] 0x7dde60 <nil> [] 0s} 127.0.0.1 49331 <nil> <nil>}
I0701 22:52:21.098120 206763 main.go:134] libmachine: About to run SSH command:
if ! grep -xq '.*\skubernetes-upgrade-20220701225208-10065' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-20220701225208-10065/g' /etc/hosts;
else
echo '127.0.1.1 kubernetes-upgrade-20220701225208-10065' | sudo tee -a /etc/hosts;
fi
fi
I0701 22:52:21.214828 206763 main.go:134] libmachine: SSH cmd err, output: <nil>:
I0701 22:52:21.214858 206763 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube}
I0701 22:52:21.214879 206763 ubuntu.go:177] setting up certificates
I0701 22:52:21.214886 206763 provision.go:83] configureAuth start
I0701 22:52:21.214927 206763 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220701225208-10065
I0701 22:52:21.258099 206763 provision.go:138] copyHostCerts
I0701 22:52:21.258168 206763 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.pem, removing ...
I0701 22:52:21.258185 206763 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.pem
I0701 22:52:21.258256 206763 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.pem (1078 bytes)
I0701 22:52:21.258352 206763 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cert.pem, removing ...
I0701 22:52:21.258378 206763 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cert.pem
I0701 22:52:21.258426 206763 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cert.pem (1123 bytes)
I0701 22:52:21.258504 206763 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/key.pem, removing ...
I0701 22:52:21.258519 206763 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/key.pem
I0701 22:52:21.258553 206763 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/key.pem (1675 bytes)
I0701 22:52:21.258620 206763 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-20220701225208-10065 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-20220701225208-10065]
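The server cert generated above embeds the SAN list shown (192.168.76.2, 127.0.0.1, localhost, minikube and the profile name). To confirm what actually landed in such a cert, a small standard-library Go sketch; the file path is this run's and is passed as an argument:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile(os.Args[1]) // e.g. .minikube/machines/server.pem
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	fmt.Println("DNS SANs:", cert.DNSNames)    // e.g. [localhost minikube kubernetes-upgrade-20220701225208-10065]
	fmt.Println("IP SANs: ", cert.IPAddresses) // e.g. [192.168.76.2 127.0.0.1]
}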
I0701 22:52:21.537501 206763 provision.go:172] copyRemoteCerts
I0701 22:52:21.537573 206763 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0701 22:52:21.537619 206763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220701225208-10065
I0701 22:52:21.581046 206763 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49331 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/kubernetes-upgrade-20220701225208-10065/id_rsa Username:docker}
I0701 22:52:21.672046 206763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0701 22:52:21.692972 206763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server.pem --> /etc/docker/server.pem (1289 bytes)
I0701 22:52:21.710084 206763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0701 22:52:21.726727 206763 provision.go:86] duration metric: configureAuth took 511.83212ms
I0701 22:52:21.726752 206763 ubuntu.go:193] setting minikube options for container-runtime
I0701 22:52:21.726922 206763 config.go:178] Loaded profile config "kubernetes-upgrade-20220701225208-10065": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
I0701 22:52:21.726975 206763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220701225208-10065
I0701 22:52:21.759345 206763 main.go:134] libmachine: Using SSH client type: native
I0701 22:52:21.759528 206763 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7dae00] 0x7dde60 <nil> [] 0s} 127.0.0.1 49331 <nil> <nil>}
I0701 22:52:21.759546 206763 main.go:134] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0701 22:52:21.879346 206763 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
I0701 22:52:21.879371 206763 ubuntu.go:71] root file system type: overlay
I0701 22:52:21.879552 206763 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0701 22:52:21.879603 206763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220701225208-10065
I0701 22:52:21.912934 206763 main.go:134] libmachine: Using SSH client type: native
I0701 22:52:21.913144 206763 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7dae00] 0x7dde60 <nil> [] 0s} 127.0.0.1 49331 <nil> <nil>}
I0701 22:52:21.913244 206763 main.go:134] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0701 22:52:22.035992 206763 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0701 22:52:22.036072 206763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220701225208-10065
I0701 22:52:22.071326 206763 main.go:134] libmachine: Using SSH client type: native
I0701 22:52:22.071531 206763 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7dae00] 0x7dde60 <nil> [] 0s} 127.0.0.1 49331 <nil> <nil>}
I0701 22:52:22.071554 206763 main.go:134] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
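The empty ExecStart= line in the unit above is the standard systemd idiom for replacing the command of a non-oneshot service: the first assignment clears the inherited command list, the second installs the replacement. The same pattern in a minimal drop-in, with an illustrative override path that this run does not write:

# /etc/systemd/system/docker.service.d/override.conf
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock

Note also that the diff-then-replace command above restarts docker only when the rendered unit actually differs from the installed one, which keeps repeated starts idempotent.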
I0701 22:52:19.765798 208328 cli_runner.go:217] Completed: docker run --rm --name force-systemd-flag-20220701225213-10065-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-20220701225213-10065 --entrypoint /usr/bin/test -v force-systemd-flag-20220701225213-10065:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e -d /var/lib: (5.00903497s)
I0701 22:52:19.765835 208328 oci.go:107] Successfully prepared a docker volume force-systemd-flag-20220701225213-10065
I0701 22:52:19.765866 208328 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
I0701 22:52:19.765893 208328 kic.go:179] Starting extracting preloaded images to volume ...
I0701 22:52:19.765958 208328 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-20220701225213-10065:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e -I lz4 -xf /preloaded.tar -C /extractDir
I0701 22:52:20.743395 199091 out.go:204] * Preparing Kubernetes v1.24.2 on Docker 20.10.17 ...
I0701 22:52:20.743511 199091 cli_runner.go:164] Run: docker network inspect pause-20220701225037-10065 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0701 22:52:20.786161 199091 ssh_runner.go:195] Run: grep 192.168.67.1 host.minikube.internal$ /etc/hosts
I0701 22:52:20.790736 199091 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
I0701 22:52:20.790805 199091 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0701 22:52:20.843652 199091 docker.go:602] Got preloaded images: -- stdout --
k8s.gcr.io/kube-apiserver:v1.24.2
k8s.gcr.io/kube-scheduler:v1.24.2
k8s.gcr.io/kube-controller-manager:v1.24.2
k8s.gcr.io/kube-proxy:v1.24.2
k8s.gcr.io/etcd:3.5.3-0
k8s.gcr.io/pause:3.7
k8s.gcr.io/coredns/coredns:v1.8.6
k8s.gcr.io/pause:3.6
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0701 22:52:20.843682 199091 docker.go:533] Images already preloaded, skipping extraction
I0701 22:52:20.843737 199091 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0701 22:52:20.913902 199091 docker.go:602] Got preloaded images: -- stdout --
k8s.gcr.io/kube-apiserver:v1.24.2
k8s.gcr.io/kube-controller-manager:v1.24.2
k8s.gcr.io/kube-proxy:v1.24.2
k8s.gcr.io/kube-scheduler:v1.24.2
k8s.gcr.io/etcd:3.5.3-0
k8s.gcr.io/pause:3.7
k8s.gcr.io/coredns/coredns:v1.8.6
k8s.gcr.io/pause:3.6
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0701 22:52:20.913928 199091 cache_images.go:84] Images are preloaded, skipping loading
I0701 22:52:20.913971 199091 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0701 22:52:21.045395 199091 cni.go:95] Creating CNI manager for ""
I0701 22:52:21.045419 199091 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0701 22:52:21.045429 199091 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0701 22:52:21.045449 199091 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.24.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-20220701225037-10065 NodeName:pause-20220701225037-10065 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0701 22:52:21.045605 199091 kubeadm.go:162] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.67.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/cri-dockerd.sock
name: "pause-20220701225037-10065"
kubeletExtraArgs:
node-ip: 192.168.67.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.24.2
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
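A multi-document config like the one above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) can be exercised without touching a live node. One option, assuming a kubeadm binary of the matching version is on PATH, is a dry run against the staged file written a few lines below:

kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run

--dry-run prints the rendered manifests and preflight results without applying changes to the host.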
I0701 22:52:21.045681 199091 kubeadm.go:961] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.24.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=pause-20220701225037-10065 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
[Install]
config:
{KubernetesVersion:v1.24.2 ClusterName:pause-20220701225037-10065 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0701 22:52:21.045732 199091 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.2
I0701 22:52:21.060057 199091 binaries.go:44] Found k8s binaries, skipping transfer
I0701 22:52:21.060126 199091 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0701 22:52:21.068803 199091 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (488 bytes)
I0701 22:52:21.133316 199091 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0701 22:52:21.166696 199091 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2049 bytes)
I0701 22:52:21.230832 199091 ssh_runner.go:195] Run: grep 192.168.67.2 control-plane.minikube.internal$ /etc/hosts
I0701 22:52:21.235379 199091 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/pause-20220701225037-10065 for IP: 192.168.67.2
I0701 22:52:21.235515 199091 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.key
I0701 22:52:21.235564 199091 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/proxy-client-ca.key
I0701 22:52:21.235670 199091 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/pause-20220701225037-10065/client.key
I0701 22:52:21.235750 199091 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/pause-20220701225037-10065/apiserver.key.c7fa3a9e
I0701 22:52:21.235801 199091 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/pause-20220701225037-10065/proxy-client.key
I0701 22:52:21.235942 199091 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/10065.pem (1338 bytes)
W0701 22:52:21.235993 199091 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/10065_empty.pem, impossibly tiny 0 bytes
I0701 22:52:21.236011 199091 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca-key.pem (1675 bytes)
I0701 22:52:21.236043 199091 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem (1078 bytes)
I0701 22:52:21.236079 199091 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/cert.pem (1123 bytes)
I0701 22:52:21.236111 199091 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/key.pem (1675 bytes)
I0701 22:52:21.236161 199091 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100652.pem (1708 bytes)
I0701 22:52:21.236978 199091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/pause-20220701225037-10065/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0701 22:52:21.256600 199091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/pause-20220701225037-10065/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0701 22:52:21.275827 199091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/pause-20220701225037-10065/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0701 22:52:21.294154 199091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/pause-20220701225037-10065/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0701 22:52:21.313598 199091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0701 22:52:21.334099 199091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0701 22:52:21.356557 199091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0701 22:52:21.376123 199091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0701 22:52:21.395642 199091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100652.pem --> /usr/share/ca-certificates/100652.pem (1708 bytes)
I0701 22:52:21.415205 199091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0701 22:52:21.435442 199091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/10065.pem --> /usr/share/ca-certificates/10065.pem (1338 bytes)
I0701 22:52:21.532823 199091 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0701 22:52:21.547807 199091 ssh_runner.go:195] Run: openssl version
I0701 22:52:21.553275 199091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0701 22:52:21.561835 199091 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0701 22:52:21.565288 199091 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul 1 22:24 /usr/share/ca-certificates/minikubeCA.pem
I0701 22:52:21.565332 199091 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0701 22:52:21.571630 199091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0701 22:52:21.580603 199091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10065.pem && ln -fs /usr/share/ca-certificates/10065.pem /etc/ssl/certs/10065.pem"
I0701 22:52:21.590102 199091 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10065.pem
I0701 22:52:21.593954 199091 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul 1 22:28 /usr/share/ca-certificates/10065.pem
I0701 22:52:21.594004 199091 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10065.pem
I0701 22:52:21.600268 199091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10065.pem /etc/ssl/certs/51391683.0"
I0701 22:52:21.608969 199091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/100652.pem && ln -fs /usr/share/ca-certificates/100652.pem /etc/ssl/certs/100652.pem"
I0701 22:52:21.618325 199091 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/100652.pem
I0701 22:52:21.622422 199091 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul 1 22:28 /usr/share/ca-certificates/100652.pem
I0701 22:52:21.622480 199091 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/100652.pem
I0701 22:52:21.627406 199091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/100652.pem /etc/ssl/certs/3ec20f2e.0"
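The `openssl x509 -hash` / `ln -fs` pairs above install each CA under its OpenSSL subject hash so TLS clients on the node can locate it. A hedged Go sketch of the same dance, with illustrative paths:

// Sketch: ask openssl for the subject hash of a PEM certificate, then
// point /etc/ssl/certs/<hash>.0 at it, as the commands above do.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

func linkCert(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// os.Symlink fails if the link exists; remove first for idempotency,
	// matching the `test -L ... || ln -fs ...` guard in the log.
	_ = os.Remove(link)
	return os.Symlink(pem, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		log.Fatal(err)
	}
}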
I0701 22:52:21.634718 199091 kubeadm.go:395] StartCluster: {Name:pause-20220701225037-10065 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:pause-20220701225037-10065 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
I0701 22:52:21.634830 199091 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0701 22:52:21.674237 199091 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0701 22:52:21.681624 199091 kubeadm.go:410] found existing configuration files, will attempt cluster restart
I0701 22:52:21.681650 199091 kubeadm.go:626] restartCluster start
I0701 22:52:21.681695 199091 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0701 22:52:21.688813 199091 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0701 22:52:21.689505 199091 kubeconfig.go:92] found "pause-20220701225037-10065" server: "https://192.168.67.2:8443"
I0701 22:52:21.690143 199091 kapi.go:59] client config for pause-20220701225037-10065: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/pause-20220701225037-10065/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/pause-20220701225037-10065/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x173d480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
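The rest.Config dump above is what client-go derives from the profile's kubeconfig. A minimal client-go sketch that builds an equivalent config and queries the apiserver; the kubeconfig path is a placeholder:

// Sketch: load a kubeconfig into a rest.Config and list nodes.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err) // expected while the apiserver is still down, as in this log
	}
	fmt.Println("nodes:", len(nodes.Items))
}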
I0701 22:52:21.690795 199091 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0701 22:52:21.698182 199091 api_server.go:165] Checking apiserver status ...
I0701 22:52:21.698225 199091 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0701 22:52:21.706287 199091 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0701 22:52:21.906854 199091 api_server.go:165] Checking apiserver status ...
I0701 22:52:21.906928 199091 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0701 22:52:21.916639 199091 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0701 22:52:22.106884 199091 api_server.go:165] Checking apiserver status ...
I0701 22:52:22.106956 199091 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0701 22:52:22.115595 199091 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0701 22:52:22.306884 199091 api_server.go:165] Checking apiserver status ...
I0701 22:52:22.306961 199091 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0701 22:52:22.319715 199091 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0701 22:52:22.507047 199091 api_server.go:165] Checking apiserver status ...
I0701 22:52:22.507114 199091 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0701 22:52:22.522631 199091 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0701 22:52:22.706944 199091 api_server.go:165] Checking apiserver status ...
I0701 22:52:22.707011 199091 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0701 22:52:22.718928 199091 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0701 22:52:22.907178 199091 api_server.go:165] Checking apiserver status ...
I0701 22:52:22.907259 199091 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0701 22:52:22.916713 199091 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0701 22:52:23.106997 199091 api_server.go:165] Checking apiserver status ...
I0701 22:52:23.107061 199091 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0701 22:52:23.117014 199091 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0701 22:52:23.307393 199091 api_server.go:165] Checking apiserver status ...
I0701 22:52:23.307497 199091 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0701 22:52:23.316259 199091 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0701 22:52:23.506470 199091 api_server.go:165] Checking apiserver status ...
I0701 22:52:23.506544 199091 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0701 22:52:23.515173 199091 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0701 22:52:23.706363 199091 api_server.go:165] Checking apiserver status ...
I0701 22:52:23.706430 199091 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0701 22:52:23.715135 199091 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0701 22:52:23.906368 199091 api_server.go:165] Checking apiserver status ...
I0701 22:52:23.906439 199091 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0701 22:52:23.916053 199091 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0701 22:52:24.107344 199091 api_server.go:165] Checking apiserver status ...
I0701 22:52:24.107403 199091 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0701 22:52:24.116006 199091 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0701 22:52:24.307351 199091 api_server.go:165] Checking apiserver status ...
I0701 22:52:24.307417 199091 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0701 22:52:24.316084 199091 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0701 22:52:24.507335 199091 api_server.go:165] Checking apiserver status ...
I0701 22:52:24.507417 199091 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0701 22:52:24.516495 199091 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0701 22:52:24.706656 199091 api_server.go:165] Checking apiserver status ...
I0701 22:52:24.706736 199091 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0701 22:52:24.715592 199091 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0701 22:52:24.715616 199091 api_server.go:165] Checking apiserver status ...
I0701 22:52:24.715645 199091 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0701 22:52:24.724213 199091 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0701 22:52:24.724235 199091 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
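The block above is a fixed-interval poll: re-run pgrep every ~200ms until a kube-apiserver pid appears or the deadline expires, then fall back to reconfiguration. A stdlib sketch of that loop; the 3-second timeout is illustrative, the real wait is longer:

// Sketch: poll for a running kube-apiserver process until a deadline.
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// Exit status 0 means pgrep found a matching kube-apiserver pid.
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil
		}
		time.Sleep(200 * time.Millisecond)
	}
	return errors.New("timed out waiting for the condition")
}

func main() {
	if err := waitForAPIServer(3 * time.Second); err != nil {
		fmt.Println("needs reconfigure:", err) // the branch taken in the log
	}
}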
I0701 22:52:24.724241 199091 kubeadm.go:1092] stopping kube-system containers ...
I0701 22:52:24.724290 199091 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0701 22:52:24.757723 199091 docker.go:434] Stopping containers: [d21e2232a42b 58283dd133ae 1489f2e1da5f fcb211cabc39 6ae728ad062b 14a43d537860 3f4960b499e2 3931159eb84c 9ac586c32c8e bcdc199fee91 cbc14a38b672 c5cdf93ad692 6f599a0df297 12657b5aa4cd ca2542402cbc 536083a3c7c5 0c788e6c2db9 6412100f2fc4 58bd01c22f50 bee4a477ee64 3ae7d9bd5c89 a239cd7931e2 50576a043124]
I0701 22:52:24.757782 199091 ssh_runner.go:195] Run: docker stop d21e2232a42b 58283dd133ae 1489f2e1da5f fcb211cabc39 6ae728ad062b 14a43d537860 3f4960b499e2 3931159eb84c 9ac586c32c8e bcdc199fee91 cbc14a38b672 c5cdf93ad692 6f599a0df297 12657b5aa4cd ca2542402cbc 536083a3c7c5 0c788e6c2db9 6412100f2fc4 58bd01c22f50 bee4a477ee64 3ae7d9bd5c89 a239cd7931e2 50576a043124
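Stopping the kube-system containers is a two-step docker CLI sequence: list IDs with the same name filter the log uses, then stop them in one invocation. A hedged sketch:

// Sketch: enumerate kube-system container IDs, then `docker stop` them.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func stopKubeSystemContainers() error {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter=name=k8s_.*_(kube-system)_", "--format", "{{.ID}}").Output()
	if err != nil {
		return err
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		return nil // nothing to stop
	}
	fmt.Println("Stopping containers:", ids)
	args := append([]string{"stop"}, ids...)
	return exec.Command("docker", args...).Run()
}

func main() {
	if err := stopKubeSystemContainers(); err != nil {
		panic(err)
	}
}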
I0701 22:52:26.435912 206763 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2022-06-06 23:01:03.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2022-07-01 22:52:22.028950126 +0000
@@ -1,30 +1,32 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
-After=network-online.target docker.socket firewalld.service containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
Wants=network-online.target
-Requires=docker.socket containerd.service
+Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
+Restart=on-failure
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this option.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
I0701 22:52:26.435950 206763 machine.go:91] provisioned docker machine in 5.749005231s
I0701 22:52:26.435963 206763 client.go:171] LocalClient.Create took 17.393806563s
I0701 22:52:26.435988 206763 start.go:173] duration metric: libmachine.API.Create for "kubernetes-upgrade-20220701225208-10065" took 17.393869702s
I0701 22:52:26.436001 206763 start.go:306] post-start starting for "kubernetes-upgrade-20220701225208-10065" (driver="docker")
I0701 22:52:26.436013 206763 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0701 22:52:26.436074 206763 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0701 22:52:26.436121 206763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220701225208-10065
I0701 22:52:26.475207 206763 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49331 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/kubernetes-upgrade-20220701225208-10065/id_rsa Username:docker}
I0701 22:52:26.580213 206763 ssh_runner.go:195] Run: cat /etc/os-release
I0701 22:52:26.584079 206763 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0701 22:52:26.584110 206763 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0701 22:52:26.584124 206763 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0701 22:52:26.584136 206763 info.go:137] Remote host: Ubuntu 20.04.4 LTS
I0701 22:52:26.584148 206763 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/addons for local assets ...
I0701 22:52:26.584217 206763 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files for local assets ...
I0701 22:52:26.584302 206763 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100652.pem -> 100652.pem in /etc/ssl/certs
I0701 22:52:26.584409 206763 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0701 22:52:26.591931 206763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100652.pem --> /etc/ssl/certs/100652.pem (1708 bytes)
I0701 22:52:26.612213 206763 start.go:309] post-start completed in 176.194942ms
I0701 22:52:26.613651 206763 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220701225208-10065
I0701 22:52:26.655989 206763 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kubernetes-upgrade-20220701225208-10065/config.json ...
I0701 22:52:26.656252 206763 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0701 22:52:26.656303 206763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220701225208-10065
I0701 22:52:26.695631 206763 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49331 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/kubernetes-upgrade-20220701225208-10065/id_rsa Username:docker}
I0701 22:52:26.779955 206763 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0701 22:52:26.784309 206763 start.go:134] duration metric: createHost completed in 17.745029055s
I0701 22:52:26.784333 206763 start.go:81] releasing machines lock for "kubernetes-upgrade-20220701225208-10065", held for 17.745167377s
I0701 22:52:26.784410 206763 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220701225208-10065
I0701 22:52:26.826876 206763 ssh_runner.go:195] Run: systemctl --version
I0701 22:52:26.826938 206763 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
I0701 22:52:26.826990 206763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220701225208-10065
I0701 22:52:26.826942 206763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220701225208-10065
I0701 22:52:26.869016 206763 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49331 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/kubernetes-upgrade-20220701225208-10065/id_rsa Username:docker}
I0701 22:52:26.873732 206763 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49331 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/kubernetes-upgrade-20220701225208-10065/id_rsa Username:docker}
I0701 22:52:26.965145 206763 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0701 22:52:26.989133 206763 cruntime.go:273] skipping containerd shutdown because we are bound to it
I0701 22:52:26.989195 206763 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0701 22:52:27.032212 206763 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I0701 22:52:27.047955 206763 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0701 22:52:27.147114 206763 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0701 22:52:27.255178 206763 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0701 22:52:27.349186 206763 ssh_runner.go:195] Run: sudo systemctl restart docker
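The unmask / enable / daemon-reload / restart sequence above is what picks up the rewritten docker unit. A local sketch of the same steps; minikube runs them through its ssh_runner instead:

// Sketch: apply a rewritten systemd unit by reloading and restarting.
package main

import (
	"log"
	"os/exec"
)

func main() {
	steps := [][]string{
		{"systemctl", "unmask", "docker.service"},
		{"systemctl", "enable", "docker.socket"},
		{"systemctl", "daemon-reload"}, // pick up the rewritten unit file
		{"systemctl", "restart", "docker"},
	}
	for _, s := range steps {
		cmd := exec.Command("sudo", s...)
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Fatalf("%v failed: %v\n%s", s, err, out)
		}
	}
}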
I0701 22:52:27.709730 206763 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0701 22:52:27.761478 206763 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0701 22:52:27.822309 206763 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.17 ...
I0701 22:52:27.822410 206763 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-20220701225208-10065 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0701 22:52:27.857943 206763 ssh_runner.go:195] Run: grep 192.168.76.1 host.minikube.internal$ /etc/hosts
I0701 22:52:27.861533 206763 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0701 22:52:27.871129 206763 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
I0701 22:52:27.871194 206763 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0701 22:52:27.919794 206763 docker.go:602] Got preloaded images: -- stdout --
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/kube-apiserver:v1.16.0
k8s.gcr.io/kube-controller-manager:v1.16.0
k8s.gcr.io/kube-proxy:v1.16.0
k8s.gcr.io/kube-scheduler:v1.16.0
k8s.gcr.io/etcd:3.3.15-0
k8s.gcr.io/coredns:1.6.2
k8s.gcr.io/pause:3.1
-- /stdout --
I0701 22:52:27.919820 206763 docker.go:533] Images already preloaded, skipping extraction
I0701 22:52:27.919870 206763 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0701 22:52:27.958358 206763 docker.go:602] Got preloaded images: -- stdout --
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/kube-apiserver:v1.16.0
k8s.gcr.io/kube-proxy:v1.16.0
k8s.gcr.io/kube-controller-manager:v1.16.0
k8s.gcr.io/kube-scheduler:v1.16.0
k8s.gcr.io/etcd:3.3.15-0
k8s.gcr.io/coredns:1.6.2
k8s.gcr.io/pause:3.1
-- /stdout --
I0701 22:52:27.958383 206763 cache_images.go:84] Images are preloaded, skipping loading
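The preload check compares the tags reported by `docker images` against the expected image set for the target Kubernetes version. A hedged sketch using the v1.16.0 list from the log:

// Sketch: verify every expected image is already present on the node.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, img := range strings.Fields(string(out)) {
		have[img] = true
	}
	expected := []string{
		"k8s.gcr.io/kube-apiserver:v1.16.0",
		"k8s.gcr.io/kube-controller-manager:v1.16.0",
		"k8s.gcr.io/kube-proxy:v1.16.0",
		"k8s.gcr.io/kube-scheduler:v1.16.0",
		"k8s.gcr.io/etcd:3.3.15-0",
		"k8s.gcr.io/coredns:1.6.2",
		"k8s.gcr.io/pause:3.1",
	}
	for _, img := range expected {
		if !have[img] {
			fmt.Println("missing, would load from cache:", img)
		}
	}
	fmt.Println("Images are preloaded, skipping loading")
}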
I0701 22:52:27.958427 206763 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0701 22:52:28.057275 206763 cni.go:95] Creating CNI manager for ""
I0701 22:52:28.057297 206763 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0701 22:52:28.057306 206763 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0701 22:52:28.057317 206763 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-20220701225208-10065 NodeName:kubernetes-upgrade-20220701225208-10065 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0701 22:52:28.057481 206763 kubeadm.go:162] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.76.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "kubernetes-upgrade-20220701225208-10065"
  kubeletExtraArgs:
    node-ip: 192.168.76.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: kubernetes-upgrade-20220701225208-10065
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
kubernetesVersion: v1.16.0
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
  # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
  # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0701 22:52:28.057577 206763 kubeadm.go:961] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=kubernetes-upgrade-20220701225208-10065 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
[Install]
config:
{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-20220701225208-10065 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0701 22:52:28.057639 206763 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
I0701 22:52:28.064939 206763 binaries.go:44] Found k8s binaries, skipping transfer
I0701 22:52:28.065001 206763 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0701 22:52:28.071571 206763 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (365 bytes)
I0701 22:52:28.084170 206763 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0701 22:52:28.100695 206763 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
I0701 22:52:28.116473 206763 ssh_runner.go:195] Run: grep 192.168.76.2 control-plane.minikube.internal$ /etc/hosts
I0701 22:52:28.120075 206763 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
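The bash one-liner above is an idempotent /etc/hosts update: filter out any stale entry for the name, then append the current mapping. A rough Go equivalent, writing to a scratch path rather than /etc/hosts:

// Sketch: drop stale lines for a host name, append the current mapping.
package main

import (
	"log"
	"os"
	"strings"
)

func upsertHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// mirror `grep -v $'\t<name>$'`, slightly loosened to also match a space
		if line != "" && !strings.HasSuffix(line, "\t"+name) && !strings.HasSuffix(line, " "+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := upsertHost("/tmp/hosts.example", "192.168.76.2", "control-plane.minikube.internal"); err != nil {
		log.Fatal(err)
	}
}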
I0701 22:52:28.131247 206763 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kubernetes-upgrade-20220701225208-10065 for IP: 192.168.76.2
I0701 22:52:28.131357 206763 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.key
I0701 22:52:28.131420 206763 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/proxy-client-ca.key
I0701 22:52:28.131544 206763 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kubernetes-upgrade-20220701225208-10065/client.key
I0701 22:52:28.131565 206763 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kubernetes-upgrade-20220701225208-10065/client.crt with IP's: []
I0701 22:52:28.392404 206763 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kubernetes-upgrade-20220701225208-10065/client.crt ...
I0701 22:52:28.392435 206763 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kubernetes-upgrade-20220701225208-10065/client.crt: {Name:mk7dea7b0ab3ec583cb0306b64f0cf6ec3b27c2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0701 22:52:28.392614 206763 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kubernetes-upgrade-20220701225208-10065/client.key ...
I0701 22:52:28.392631 206763 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kubernetes-upgrade-20220701225208-10065/client.key: {Name:mk7bf002acb651d3323f1ff2988b435557d13609 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0701 22:52:28.392757 206763 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kubernetes-upgrade-20220701225208-10065/apiserver.key.31bdca25
I0701 22:52:28.392782 206763 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kubernetes-upgrade-20220701225208-10065/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0701 22:52:28.516548 206763 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kubernetes-upgrade-20220701225208-10065/apiserver.crt.31bdca25 ...
I0701 22:52:28.516579 206763 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kubernetes-upgrade-20220701225208-10065/apiserver.crt.31bdca25: {Name:mk0075777ded78305e778a622904ba3619d84ca5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0701 22:52:28.516766 206763 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kubernetes-upgrade-20220701225208-10065/apiserver.key.31bdca25 ...
I0701 22:52:28.516783 206763 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kubernetes-upgrade-20220701225208-10065/apiserver.key.31bdca25: {Name:mkf55b2ec8d189490bc2bd1a4fa9cd3166c55bb5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0701 22:52:28.516887 206763 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kubernetes-upgrade-20220701225208-10065/apiserver.crt.31bdca25 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kubernetes-upgrade-20220701225208-10065/apiserver.crt
I0701 22:52:28.516957 206763 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kubernetes-upgrade-20220701225208-10065/apiserver.key.31bdca25 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kubernetes-upgrade-20220701225208-10065/apiserver.key
I0701 22:52:28.517024 206763 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kubernetes-upgrade-20220701225208-10065/proxy-client.key
I0701 22:52:28.517046 206763 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kubernetes-upgrade-20220701225208-10065/proxy-client.crt with IP's: []
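certs.go and crypto.go above generate CA-signed key pairs on the host. A self-contained crypto/x509 sketch of that flow, with a throwaway CA and illustrative names, key sizes, and lifetimes (not minikube's actual certificate parameters):

// Sketch: create a throwaway CA, then sign a client cert with it.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"os"
	"time"
)

func main() {
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		log.Fatal(err)
	}
	leafKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "proxy-client"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	}
	leafDER, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	// Write the cert the same way crypto.go does: PEM-encoded to a .crt.
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER}); err != nil {
		log.Fatal(err)
	}
}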
I0701 22:52:25.936453 208328 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-20220701225213-10065:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e -I lz4 -xf /preloaded.tar -C /extractDir: (6.170418686s)
I0701 22:52:25.936488 208328 kic.go:188] duration metric: took 6.170592 seconds to extract preloaded images to volume
W0701 22:52:25.936668 208328 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0701 22:52:25.936792 208328 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0701 22:52:26.079166 208328 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-20220701225213-10065 --name force-systemd-flag-20220701225213-10065 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-20220701225213-10065 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-20220701225213-10065 --network force-systemd-flag-20220701225213-10065 --ip 192.168.85.2 --volume force-systemd-flag-20220701225213-10065:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e
I0701 22:52:26.607980 208328 cli_runner.go:164] Run: docker container inspect force-systemd-flag-20220701225213-10065 --format={{.State.Running}}
I0701 22:52:26.651804 208328 cli_runner.go:164] Run: docker container inspect force-systemd-flag-20220701225213-10065 --format={{.State.Status}}
I0701 22:52:26.689474 208328 cli_runner.go:164] Run: docker exec force-systemd-flag-20220701225213-10065 stat /var/lib/dpkg/alternatives/iptables
I0701 22:52:26.756186 208328 oci.go:144] the created container "force-systemd-flag-20220701225213-10065" has a running status.
I0701 22:52:26.756220 208328 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/force-systemd-flag-20220701225213-10065/id_rsa...
I0701 22:52:27.162603 208328 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/force-systemd-flag-20220701225213-10065/id_rsa.pub -> /home/docker/.ssh/authorized_keys
I0701 22:52:27.162666 208328 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/force-systemd-flag-20220701225213-10065/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0701 22:52:27.287000 208328 cli_runner.go:164] Run: docker container inspect force-systemd-flag-20220701225213-10065 --format={{.State.Status}}
I0701 22:52:27.327797 208328 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0701 22:52:27.327832 208328 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-20220701225213-10065 chown docker:docker /home/docker/.ssh/authorized_keys]
I0701 22:52:27.518455 208328 cli_runner.go:164] Run: docker container inspect force-systemd-flag-20220701225213-10065 --format={{.State.Status}}
I0701 22:52:27.562280 208328 machine.go:88] provisioning docker machine ...
I0701 22:52:27.562320 208328 ubuntu.go:169] provisioning hostname "force-systemd-flag-20220701225213-10065"
I0701 22:52:27.562383 208328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220701225213-10065
I0701 22:52:27.602420 208328 main.go:134] libmachine: Using SSH client type: native
I0701 22:52:27.602634 208328 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7dae00] 0x7dde60 <nil> [] 0s} 127.0.0.1 49336 <nil> <nil>}
I0701 22:52:27.602667 208328 main.go:134] libmachine: About to run SSH command:
sudo hostname force-systemd-flag-20220701225213-10065 && echo "force-systemd-flag-20220701225213-10065" | sudo tee /etc/hostname
I0701 22:52:27.741643 208328 main.go:134] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-20220701225213-10065
I0701 22:52:27.741726 208328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220701225213-10065
I0701 22:52:27.786531 208328 main.go:134] libmachine: Using SSH client type: native
I0701 22:52:27.786722 208328 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7dae00] 0x7dde60 <nil> [] 0s} 127.0.0.1 49336 <nil> <nil>}
I0701 22:52:27.786754 208328 main.go:134] libmachine: About to run SSH command:
if ! grep -xq '.*\sforce-systemd-flag-20220701225213-10065' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-20220701225213-10065/g' /etc/hosts;
else
echo '127.0.1.1 force-systemd-flag-20220701225213-10065' | sudo tee -a /etc/hosts;
fi
fi
I0701 22:52:27.915670 208328 main.go:134] libmachine: SSH cmd err, output: <nil>:
I0701 22:52:27.915703 208328 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube}
I0701 22:52:27.915724 208328 ubuntu.go:177] setting up certificates
I0701 22:52:27.915734 208328 provision.go:83] configureAuth start
I0701 22:52:27.915785 208328 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-20220701225213-10065
I0701 22:52:27.954739 208328 provision.go:138] copyHostCerts
I0701 22:52:27.954788 208328 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.pem
I0701 22:52:27.954823 208328 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.pem, removing ...
I0701 22:52:27.954837 208328 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.pem
I0701 22:52:27.954903 208328 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.pem (1078 bytes)
I0701 22:52:27.954976 208328 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cert.pem
I0701 22:52:27.955003 208328 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cert.pem, removing ...
I0701 22:52:27.955014 208328 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cert.pem
I0701 22:52:27.955053 208328 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cert.pem (1123 bytes)
I0701 22:52:27.955110 208328 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/key.pem
I0701 22:52:27.955137 208328 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/key.pem, removing ...
I0701 22:52:27.955148 208328 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/key.pem
I0701 22:52:27.955188 208328 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/key.pem (1675 bytes)
I0701 22:52:27.955246 208328 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-20220701225213-10065 san=[192.168.85.2 127.0.0.1 localhost 127.0.0.1 minikube force-systemd-flag-20220701225213-10065]
I0701 22:52:28.276008 208328 provision.go:172] copyRemoteCerts
I0701 22:52:28.276055 208328 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0701 22:52:28.276083 208328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220701225213-10065
I0701 22:52:28.308198 208328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49336 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/force-systemd-flag-20220701225213-10065/id_rsa Username:docker}
I0701 22:52:28.394550 208328 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server.pem -> /etc/docker/server.pem
I0701 22:52:28.394608 208328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server.pem --> /etc/docker/server.pem (1285 bytes)
I0701 22:52:28.413086 208328 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0701 22:52:28.413134 208328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0701 22:52:28.432317 208328 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0701 22:52:28.432376 208328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0701 22:52:28.452195 208328 provision.go:86] duration metric: configureAuth took 536.45058ms
I0701 22:52:28.452222 208328 ubuntu.go:193] setting minikube options for container-runtime
I0701 22:52:28.452386 208328 config.go:178] Loaded profile config "force-systemd-flag-20220701225213-10065": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
I0701 22:52:28.452444 208328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220701225213-10065
I0701 22:52:28.486510 208328 main.go:134] libmachine: Using SSH client type: native
I0701 22:52:28.486654 208328 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7dae00] 0x7dde60 <nil> [] 0s} 127.0.0.1 49336 <nil> <nil>}
I0701 22:52:28.486669 208328 main.go:134] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0701 22:52:28.607544 208328 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
I0701 22:52:28.607568 208328 ubuntu.go:71] root file system type: overlay
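The overlay detection above is simply the pipeline `df --output=fstype / | tail -n 1` executed over SSH. A local Go equivalent of the same probe, illustrative only:

    // Probe the root filesystem type the same way the SSH command above does.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("/bin/sh", "-c",
            "df --output=fstype / | tail -n 1").Output()
        if err != nil {
            panic(err)
        }
        fmt.Println("root file system type:", strings.TrimSpace(string(out)))
    }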
I0701 22:52:28.607740 208328 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0701 22:52:28.607797 208328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220701225213-10065
I0701 22:52:26.122573 199091 ssh_runner.go:235] Completed: docker stop d21e2232a42b 58283dd133ae 1489f2e1da5f fcb211cabc39 6ae728ad062b 14a43d537860 3f4960b499e2 3931159eb84c 9ac586c32c8e bcdc199fee91 cbc14a38b672 c5cdf93ad692 6f599a0df297 12657b5aa4cd ca2542402cbc 536083a3c7c5 0c788e6c2db9 6412100f2fc4 58bd01c22f50 bee4a477ee64 3ae7d9bd5c89 a239cd7931e2 50576a043124: (1.364751288s)
I0701 22:52:26.122632 199091 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I0701 22:52:26.231420 199091 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0701 22:52:26.240726 199091 kubeadm.go:155] found existing configuration files:
-rw------- 1 root root 5643 Jul 1 22:51 /etc/kubernetes/admin.conf
-rw------- 1 root root 5656 Jul 1 22:51 /etc/kubernetes/controller-manager.conf
-rw------- 1 root root 2043 Jul 1 22:51 /etc/kubernetes/kubelet.conf
-rw------- 1 root root 5600 Jul 1 22:51 /etc/kubernetes/scheduler.conf
I0701 22:52:26.240792 199091 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0701 22:52:26.249023 199091 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0701 22:52:26.257214 199091 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0701 22:52:26.265389 199091 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
stdout:
stderr:
I0701 22:52:26.265446 199091 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0701 22:52:26.273093 199091 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0701 22:52:26.281570 199091 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
stdout:
stderr:
I0701 22:52:26.281619 199091 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
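The pattern in the preceding lines: grep exiting with status 1 means the expected control-plane endpoint is absent from a kubeconfig, so the stale file is removed and later regenerated by `kubeadm init phase kubeconfig`. A rough Go sketch of that check-and-remove loop (illustrative, not minikube's kubeadm.go):

    // For each kubeconfig, keep it if grep finds the endpoint; if grep exits
    // non-zero (no match), delete the stale file so kubeadm rewrites it.
    package kubeconfig

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func cleanStale(endpoint string, files []string) error {
        for _, f := range files {
            err := exec.Command("grep", endpoint, f).Run()
            if err == nil {
                continue // endpoint present, keep the file
            }
            if _, ok := err.(*exec.ExitError); ok { // grep status 1: no match
                fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
                if rmErr := os.Remove(f); rmErr != nil {
                    return rmErr
                }
                continue
            }
            return err // grep itself failed to start
        }
        return nil
    }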
I0701 22:52:26.289436 199091 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0701 22:52:26.297877 199091 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I0701 22:52:26.297898 199091 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0701 22:52:26.345202 199091 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0701 22:52:27.467310 199091 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.12207146s)
I0701 22:52:27.467343 199091 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0701 22:52:27.702053 199091 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0701 22:52:27.771747 199091 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I0701 22:52:27.888423 199091 api_server.go:51] waiting for apiserver process to appear ...
I0701 22:52:27.888480 199091 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0701 22:52:28.399008 199091 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0701 22:52:28.898757 199091 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0701 22:52:29.399284 199091 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0701 22:52:29.420605 199091 api_server.go:71] duration metric: took 1.532180215s to wait for apiserver process to appear ...
I0701 22:52:29.420637 199091 api_server.go:87] waiting for apiserver healthz status ...
I0701 22:52:29.420652 199091 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I0701 22:52:29.420955 199091 api_server.go:256] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
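Readiness here is two-staged: poll pgrep until a kube-apiserver process exists, then poll /healthz until it answers; the first probe above is refused because the apiserver has not bound its port yet. A minimal healthz poller in Go, assuming a self-signed serving cert (hence the skipped TLS verification); a sketch, not minikube's api_server.go:

    // Poll an HTTPS healthz endpoint until it returns 200 or the deadline passes.
    package apiserver

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 2 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence above
        }
        return fmt.Errorf("timed out waiting for %s", url)
    }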
I0701 22:52:28.642511 208328 main.go:134] libmachine: Using SSH client type: native
I0701 22:52:28.647926 208328 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7dae00] 0x7dde60 <nil> [] 0s} 127.0.0.1 49336 <nil> <nil>}
I0701 22:52:28.648028 208328 main.go:134] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0701 22:52:28.795067 208328 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0701 22:52:28.795147 208328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220701225213-10065
I0701 22:52:28.846905 208328 main.go:134] libmachine: Using SSH client type: native
I0701 22:52:28.847082 208328 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7dae00] 0x7dde60 <nil> [] 0s} 127.0.0.1 49336 <nil> <nil>}
I0701 22:52:28.847108 208328 main.go:134] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0701 22:52:29.989640 208328 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2022-06-06 23:01:03.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2022-07-01 22:52:28.789464029 +0000
@@ -1,30 +1,32 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
-After=network-online.target docker.socket firewalld.service containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
Wants=network-online.target
-Requires=docker.socket containerd.service
+Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
+Restart=on-failure
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
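The SSH command above is a write-if-changed idiom: render the unit to docker.service.new, diff it against the installed unit, and only when they differ move the new file into place and daemon-reload/enable/restart. On this fresh machine the diff is non-empty, so the restart runs. The same idea expressed locally as an illustrative Go sketch:

    // Install newContent at path and restart the service only when the
    // on-disk unit actually differs, mirroring "diff || { mv; restart; }".
    package provision

    import (
        "bytes"
        "os"
        "os/exec"
    )

    func updateUnit(path string, newContent []byte, service string) error {
        old, _ := os.ReadFile(path) // a missing file reads as empty and differs
        if bytes.Equal(old, newContent) {
            return nil // unchanged: skip the disruptive restart
        }
        if err := os.WriteFile(path+".new", newContent, 0o644); err != nil {
            return err
        }
        if err := os.Rename(path+".new", path); err != nil {
            return err
        }
        for _, args := range [][]string{
            {"systemctl", "daemon-reload"},
            {"systemctl", "-f", "enable", service},
            {"systemctl", "-f", "restart", service},
        } {
            if err := exec.Command(args[0], args[1:]...).Run(); err != nil {
                return err
            }
        }
        return nil
    }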
I0701 22:52:29.989685 208328 machine.go:91] provisioned docker machine in 2.427380767s
I0701 22:52:29.989697 208328 client.go:171] LocalClient.Create took 16.008408385s
I0701 22:52:29.989708 208328 start.go:173] duration metric: libmachine.API.Create for "force-systemd-flag-20220701225213-10065" took 16.008459143s
I0701 22:52:29.989719 208328 start.go:306] post-start starting for "force-systemd-flag-20220701225213-10065" (driver="docker")
I0701 22:52:29.989728 208328 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0701 22:52:29.989803 208328 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0701 22:52:29.989843 208328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220701225213-10065
I0701 22:52:30.038131 208328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49336 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/force-systemd-flag-20220701225213-10065/id_rsa Username:docker}
I0701 22:52:30.129251 208328 ssh_runner.go:195] Run: cat /etc/os-release
I0701 22:52:30.132837 208328 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0701 22:52:30.132868 208328 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0701 22:52:30.132884 208328 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0701 22:52:30.132892 208328 info.go:137] Remote host: Ubuntu 20.04.4 LTS
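The three "Couldn't set key" warnings are benign: /etc/os-release is parsed as KEY=value pairs into a struct, and keys with no matching struct field are reported and skipped. A rough sketch of that kind of parser (struct fields are illustrative):

    // Parse /etc/os-release content; unknown keys trigger the same kind of
    // "no corresponding struct field" message seen in the log above.
    package osrelease

    import (
        "bufio"
        "fmt"
        "strings"
    )

    type Info struct{ ID, Name, Version string }

    func Parse(data string) Info {
        var info Info
        sc := bufio.NewScanner(strings.NewReader(data))
        for sc.Scan() {
            key, val, ok := strings.Cut(sc.Text(), "=")
            if !ok {
                continue
            }
            val = strings.Trim(val, `"`)
            switch key {
            case "ID":
                info.ID = val
            case "NAME":
                info.Name = val
            case "VERSION":
                info.Version = val
            default: // e.g. PRIVACY_POLICY_URL, VERSION_CODENAME, UBUNTU_CODENAME
                fmt.Printf("Couldn't set key %s, no corresponding struct field found\n", key)
            }
        }
        return info
    }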
I0701 22:52:30.132905 208328 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/addons for local assets ...
I0701 22:52:30.132967 208328 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files for local assets ...
I0701 22:52:30.133076 208328 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100652.pem -> 100652.pem in /etc/ssl/certs
I0701 22:52:30.133094 208328 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100652.pem -> /etc/ssl/certs/100652.pem
I0701 22:52:30.133187 208328 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0701 22:52:30.140521 208328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100652.pem --> /etc/ssl/certs/100652.pem (1708 bytes)
I0701 22:52:30.160757 208328 start.go:309] post-start completed in 171.023768ms
I0701 22:52:30.161206 208328 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-20220701225213-10065
I0701 22:52:30.203702 208328 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/force-systemd-flag-20220701225213-10065/config.json ...
I0701 22:52:30.204025 208328 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0701 22:52:30.204085 208328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220701225213-10065
I0701 22:52:30.238754 208328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49336 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/force-systemd-flag-20220701225213-10065/id_rsa Username:docker}
I0701 22:52:30.328808 208328 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0701 22:52:30.336596 208328 start.go:134] duration metric: createHost completed in 16.357834324s
I0701 22:52:30.336624 208328 start.go:81] releasing machines lock for "force-systemd-flag-20220701225213-10065", held for 16.357995709s
I0701 22:52:30.336718 208328 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-20220701225213-10065
I0701 22:52:30.377218 208328 ssh_runner.go:195] Run: systemctl --version
I0701 22:52:30.377277 208328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220701225213-10065
I0701 22:52:30.377296 208328 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
I0701 22:52:30.377362 208328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220701225213-10065
I0701 22:52:30.423506 208328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49336 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/force-systemd-flag-20220701225213-10065/id_rsa Username:docker}
I0701 22:52:30.436257 208328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49336 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/force-systemd-flag-20220701225213-10065/id_rsa Username:docker}
I0701 22:52:30.511956 208328 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0701 22:52:30.539664 208328 cruntime.go:273] skipping containerd shutdown because we are bound to it
I0701 22:52:30.539728 208328 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0701 22:52:30.549886 208328 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0701 22:52:30.567462 208328 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0701 22:52:30.684439 208328 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0701 22:52:30.774458 208328 docker.go:502] Forcing docker to use systemd as cgroup manager...
I0701 22:52:30.774492 208328 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (143 bytes)
I0701 22:52:30.790221 208328 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0701 22:52:30.881765 208328 ssh_runner.go:195] Run: sudo systemctl restart docker
I0701 22:52:31.402117 208328 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0701 22:52:31.512694 208328 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0701 22:52:31.612824 208328 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
I0701 22:52:31.625187 208328 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0701 22:52:31.625248 208328 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0701 22:52:31.628566 208328 start.go:471] Will wait 60s for crictl version
I0701 22:52:31.628619 208328 ssh_runner.go:195] Run: sudo crictl version
I0701 22:52:31.754241 208328 start.go:480] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 20.10.17
RuntimeApiVersion: 1.41.0
I0701 22:52:31.754308 208328 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0701 22:52:31.801205 208328 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0701 22:52:31.863774 208328 out.go:204] * Preparing Kubernetes v1.24.2 on Docker 20.10.17 ...
I0701 22:52:31.863863 208328 cli_runner.go:164] Run: docker network inspect force-systemd-flag-20220701225213-10065 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0701 22:52:31.902217 208328 ssh_runner.go:195] Run: grep 192.168.85.1 host.minikube.internal$ /etc/hosts
I0701 22:52:31.905668 208328 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
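The one-liner above refreshes a single /etc/hosts mapping: filter out any existing host.minikube.internal line, append the current IP, write the result to a temp file, and copy it back with sudo. The same update as an illustrative Go sketch:

    // Replace the /etc/hosts entry for name with ip, keeping all other lines.
    package hosts

    import (
        "fmt"
        "os"
        "strings"
    )

    func SetEntry(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            // Drop the old mapping, like the grep -v in the shell version.
            if strings.HasSuffix(line, "\t"+name) || strings.HasSuffix(line, " "+name) {
                continue
            }
            kept = append(kept, line)
        }
        kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
        tmp := path + ".tmp" // the shell version uses /tmp/h.$$ then sudo cp
        if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
            return err
        }
        return os.Rename(tmp, path)
    }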
I0701 22:52:31.916350 208328 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
I0701 22:52:31.916417 208328 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0701 22:52:31.954333 208328 docker.go:602] Got preloaded images: -- stdout --
k8s.gcr.io/kube-apiserver:v1.24.2
k8s.gcr.io/kube-scheduler:v1.24.2
k8s.gcr.io/kube-controller-manager:v1.24.2
k8s.gcr.io/kube-proxy:v1.24.2
k8s.gcr.io/etcd:3.5.3-0
k8s.gcr.io/pause:3.7
k8s.gcr.io/coredns/coredns:v1.8.6
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0701 22:52:31.954363 208328 docker.go:533] Images already preloaded, skipping extraction
I0701 22:52:31.954425 208328 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0701 22:52:31.989563 208328 docker.go:602] Got preloaded images: -- stdout --
k8s.gcr.io/kube-apiserver:v1.24.2
k8s.gcr.io/kube-scheduler:v1.24.2
k8s.gcr.io/kube-controller-manager:v1.24.2
k8s.gcr.io/kube-proxy:v1.24.2
k8s.gcr.io/etcd:3.5.3-0
k8s.gcr.io/pause:3.7
k8s.gcr.io/coredns/coredns:v1.8.6
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0701 22:52:31.989589 208328 cache_images.go:84] Images are preloaded, skipping loading
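`docker images --format {{.Repository}}:{{.Tag}}` runs twice here: once to decide whether the preload tarball needs extracting and once to confirm the cache afterwards; every expected v1.24.2 image is already present, so extraction is skipped. Roughly, the comparison looks like this illustrative sketch:

    // Report whether every wanted image already shows up in `docker images`.
    package preload

    import (
        "os/exec"
        "strings"
    )

    func ImagesPreloaded(wanted []string) (bool, error) {
        out, err := exec.Command("docker", "images",
            "--format", "{{.Repository}}:{{.Tag}}").Output()
        if err != nil {
            return false, err
        }
        have := map[string]bool{}
        for _, img := range strings.Fields(string(out)) {
            have[img] = true
        }
        for _, img := range wanted { // e.g. k8s.gcr.io/kube-apiserver:v1.24.2
            if !have[img] {
                return false, nil
            }
        }
        return true, nil
    }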
I0701 22:52:31.989634 208328 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0701 22:52:32.085793 208328 cni.go:95] Creating CNI manager for ""
I0701 22:52:32.085818 208328 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0701 22:52:32.085827 208328 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0701 22:52:32.085844 208328 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.24.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-20220701225213-10065 NodeName:force-systemd-flag-20220701225213-10065 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0701 22:52:32.086009 208328 kubeadm.go:162] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.85.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/cri-dockerd.sock
name: "force-systemd-flag-20220701225213-10065"
kubeletExtraArgs:
node-ip: 192.168.85.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.24.2
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0701 22:52:32.086106 208328 kubeadm.go:961] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.24.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=force-systemd-flag-20220701225213-10065 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2 --runtime-request-timeout=15m
[Install]
config:
{KubernetesVersion:v1.24.2 ClusterName:force-systemd-flag-20220701225213-10065 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0701 22:52:32.086162 208328 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.2
I0701 22:52:32.094561 208328 binaries.go:44] Found k8s binaries, skipping transfer
I0701 22:52:32.094632 208328 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0701 22:52:32.102282 208328 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (501 bytes)
I0701 22:52:32.116509 208328 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0701 22:52:32.130728 208328 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2061 bytes)
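"scp memory -->" means the payload never touches local disk: the rendered kubelet drop-in, the unit file, and kubeadm.yaml are streamed from memory over the SSH session into the remote path. A minimal sketch of such an in-memory copy, assuming an already-established golang.org/x/crypto/ssh client (illustrative, not minikube's sshutil):

    // Stream an in-memory payload into a remote file via `sudo tee`.
    package sshcopy

    import (
        "bytes"
        "fmt"

        "golang.org/x/crypto/ssh"
    )

    func WriteRemote(client *ssh.Client, path string, payload []byte) error {
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        sess.Stdin = bytes.NewReader(payload) // the "memory" side of the copy
        return sess.Run(fmt.Sprintf("sudo tee %s >/dev/null", path))
    }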
I0701 22:52:32.146742 208328 ssh_runner.go:195] Run: grep 192.168.85.2 control-plane.minikube.internal$ /etc/hosts
I0701 22:52:32.150998 208328 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0701 22:52:32.165564 208328 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/force-systemd-flag-20220701225213-10065 for IP: 192.168.85.2
I0701 22:52:32.165682 208328 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.key
I0701 22:52:32.165833 208328 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/proxy-client-ca.key
I0701 22:52:32.165968 208328 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/force-systemd-flag-20220701225213-10065/client.key
I0701 22:52:32.165995 208328 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/force-systemd-flag-20220701225213-10065/client.crt with IP's: []
I0701 22:52:32.311663 208328 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/force-systemd-flag-20220701225213-10065/client.crt ...
I0701 22:52:32.311704 208328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/force-systemd-flag-20220701225213-10065/client.crt: {Name:mk3d21dc62978353237d66f69f36dd82b27b70da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0701 22:52:32.311960 208328 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/force-systemd-flag-20220701225213-10065/client.key ...
I0701 22:52:32.312002 208328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/force-systemd-flag-20220701225213-10065/client.key: {Name:mk31ca5bbf01c7b2c8fb7d6aa8afa47de65833f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0701 22:52:32.312157 208328 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/force-systemd-flag-20220701225213-10065/apiserver.key.43b9df8c
I0701 22:52:32.312184 208328 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/force-systemd-flag-20220701225213-10065/apiserver.crt.43b9df8c with IP's: [192.168.85.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0701 22:52:32.572690 208328 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/force-systemd-flag-20220701225213-10065/apiserver.crt.43b9df8c ...
I0701 22:52:32.572726 208328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/force-systemd-flag-20220701225213-10065/apiserver.crt.43b9df8c: {Name:mkfda37fd12df5c76fbbe91fcbaba689def90db6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0701 22:52:32.572954 208328 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/force-systemd-flag-20220701225213-10065/apiserver.key.43b9df8c ...
I0701 22:52:32.572973 208328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/force-systemd-flag-20220701225213-10065/apiserver.key.43b9df8c: {Name:mk6f2fc2f74e930a0bb9c8fb7843f8e0367c57e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0701 22:52:32.573088 208328 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/force-systemd-flag-20220701225213-10065/apiserver.crt.43b9df8c -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/force-systemd-flag-20220701225213-10065/apiserver.crt
I0701 22:52:32.573163 208328 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/force-systemd-flag-20220701225213-10065/apiserver.key.43b9df8c -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/force-systemd-flag-20220701225213-10065/apiserver.key
I0701 22:52:32.573223 208328 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/force-systemd-flag-20220701225213-10065/proxy-client.key
I0701 22:52:32.573239 208328 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/force-systemd-flag-20220701225213-10065/proxy-client.crt with IP's: []
I0701 22:52:32.787275 208328 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/force-systemd-flag-20220701225213-10065/proxy-client.crt ...
I0701 22:52:32.787311 208328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/force-systemd-flag-20220701225213-10065/proxy-client.crt: {Name:mkd864acb9e9ab31960ea7e6d67f282e8761c84c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0701 22:52:32.787548 208328 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/force-systemd-flag-20220701225213-10065/proxy-client.key ...
I0701 22:52:32.787575 208328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/force-systemd-flag-20220701225213-10065/proxy-client.key: {Name:mk438c861f4570d4f36efbe6d52b5515c7ca7c1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0701 22:52:32.787712 208328 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/force-systemd-flag-20220701225213-10065/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0701 22:52:32.787744 208328 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/force-systemd-flag-20220701225213-10065/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0701 22:52:32.787765 208328 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/force-systemd-flag-20220701225213-10065/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0701 22:52:32.787782 208328 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/force-systemd-flag-20220701225213-10065/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0701 22:52:32.787798 208328 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0701 22:52:32.787811 208328 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0701 22:52:32.787828 208328 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0701 22:52:32.787839 208328 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0701 22:52:32.787892 208328 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/10065.pem (1338 bytes)
W0701 22:52:32.787939 208328 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/10065_empty.pem, impossibly tiny 0 bytes
I0701 22:52:32.787959 208328 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca-key.pem (1675 bytes)
I0701 22:52:32.787999 208328 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem (1078 bytes)
I0701 22:52:32.788041 208328 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/cert.pem (1123 bytes)
I0701 22:52:32.788079 208328 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/key.pem (1675 bytes)
I0701 22:52:32.788145 208328 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100652.pem (1708 bytes)
I0701 22:52:32.788189 208328 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0701 22:52:32.788213 208328 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/10065.pem -> /usr/share/ca-certificates/10065.pem
I0701 22:52:32.788233 208328 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100652.pem -> /usr/share/ca-certificates/100652.pem
I0701 22:52:32.788977 208328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/force-systemd-flag-20220701225213-10065/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0701 22:52:32.808994 208328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/force-systemd-flag-20220701225213-10065/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0701 22:52:32.826973 208328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/force-systemd-flag-20220701225213-10065/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0701 22:52:32.844601 208328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/force-systemd-flag-20220701225213-10065/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0701 22:52:32.863931 208328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0701 22:52:32.881658 208328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0701 22:52:32.903962 208328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0701 22:52:32.924745 208328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0701 22:52:32.947500 208328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0701 22:52:32.965958 208328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/10065.pem --> /usr/share/ca-certificates/10065.pem (1338 bytes)
I0701 22:52:32.986815 208328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100652.pem --> /usr/share/ca-certificates/100652.pem (1708 bytes)
I0701 22:52:33.005717 208328 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0701 22:52:33.019711 208328 ssh_runner.go:195] Run: openssl version
I0701 22:52:33.025400 208328 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0701 22:52:33.034029 208328 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0701 22:52:33.038078 208328 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul 1 22:24 /usr/share/ca-certificates/minikubeCA.pem
I0701 22:52:33.038139 208328 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0701 22:52:33.043344 208328 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0701 22:52:33.066493 208328 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10065.pem && ln -fs /usr/share/ca-certificates/10065.pem /etc/ssl/certs/10065.pem"
I0701 22:52:33.074620 208328 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10065.pem
I0701 22:52:33.077755 208328 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul 1 22:28 /usr/share/ca-certificates/10065.pem
I0701 22:52:33.077817 208328 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10065.pem
I0701 22:52:33.082665 208328 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10065.pem /etc/ssl/certs/51391683.0"
I0701 22:52:33.090307 208328 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/100652.pem && ln -fs /usr/share/ca-certificates/100652.pem /etc/ssl/certs/100652.pem"
I0701 22:52:33.097462 208328 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/100652.pem
I0701 22:52:33.100684 208328 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul 1 22:28 /usr/share/ca-certificates/100652.pem
I0701 22:52:33.100734 208328 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/100652.pem
I0701 22:52:33.105410 208328 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/100652.pem /etc/ssl/certs/3ec20f2e.0"
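Each CA install above follows OpenSSL's hashed-directory convention: put the PEM under /usr/share/ca-certificates, compute its subject hash with `openssl x509 -hash -noout`, and symlink /etc/ssl/certs/<hash>.0 to it so TLS clients can locate the CA by hash. An illustrative Go sketch of the hash-and-link steps:

    // Create the /etc/ssl/certs/<subject-hash>.0 symlink for a CA PEM.
    package certs

    import (
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func InstallCA(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout",
            "-in", pemPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out)) // e.g. b5213941 for minikubeCA
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // like ln -fs: replace any existing link
        return os.Symlink(pemPath, link)
    }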
I0701 22:52:33.113380 208328 kubeadm.go:395] StartCluster: {Name:force-systemd-flag-20220701225213-10065 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:force-systemd-flag-20220701225213-10065 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
I0701 22:52:33.113512 208328 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0701 22:52:33.150298 208328 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0701 22:52:33.159076 208328 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0701 22:52:33.168110 208328 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
I0701 22:52:33.168174 208328 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0701 22:52:33.177763 208328 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0701 22:52:33.177813 208328 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0701 22:52:28.688239 206763 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kubernetes-upgrade-20220701225208-10065/proxy-client.crt ...
I0701 22:52:28.688263 206763 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kubernetes-upgrade-20220701225208-10065/proxy-client.crt: {Name:mk0a542937934ba5a6bb8b0ac8473e888d13d2d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0701 22:52:28.688442 206763 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kubernetes-upgrade-20220701225208-10065/proxy-client.key ...
I0701 22:52:28.688459 206763 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kubernetes-upgrade-20220701225208-10065/proxy-client.key: {Name:mkb03fec3df7263d2778550114762ad53d809262 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0701 22:52:28.688669 206763 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/10065.pem (1338 bytes)
W0701 22:52:28.688720 206763 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/10065_empty.pem, impossibly tiny 0 bytes
I0701 22:52:28.688738 206763 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca-key.pem (1675 bytes)
I0701 22:52:28.688780 206763 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem (1078 bytes)
I0701 22:52:28.688807 206763 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/cert.pem (1123 bytes)
I0701 22:52:28.688837 206763 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/key.pem (1675 bytes)
I0701 22:52:28.688876 206763 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100652.pem (1708 bytes)
I0701 22:52:28.689527 206763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kubernetes-upgrade-20220701225208-10065/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0701 22:52:28.711582 206763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kubernetes-upgrade-20220701225208-10065/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0701 22:52:28.736721 206763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kubernetes-upgrade-20220701225208-10065/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0701 22:52:28.787136 206763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kubernetes-upgrade-20220701225208-10065/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0701 22:52:28.812620 206763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0701 22:52:28.842304 206763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0701 22:52:28.859673 206763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0701 22:52:28.878342 206763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0701 22:52:28.900859 206763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100652.pem --> /usr/share/ca-certificates/100652.pem (1708 bytes)
I0701 22:52:28.948022 206763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0701 22:52:28.968544 206763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/10065.pem --> /usr/share/ca-certificates/10065.pem (1338 bytes)
I0701 22:52:28.988352 206763 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0701 22:52:29.006100 206763 ssh_runner.go:195] Run: openssl version
I0701 22:52:29.012859 206763 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/100652.pem && ln -fs /usr/share/ca-certificates/100652.pem /etc/ssl/certs/100652.pem"
I0701 22:52:29.025564 206763 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/100652.pem
I0701 22:52:29.029487 206763 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul 1 22:28 /usr/share/ca-certificates/100652.pem
I0701 22:52:29.029575 206763 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/100652.pem
I0701 22:52:29.036688 206763 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/100652.pem /etc/ssl/certs/3ec20f2e.0"
I0701 22:52:29.046224 206763 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0701 22:52:29.053562 206763 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0701 22:52:29.057043 206763 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul 1 22:24 /usr/share/ca-certificates/minikubeCA.pem
I0701 22:52:29.057102 206763 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0701 22:52:29.061902 206763 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0701 22:52:29.069192 206763 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10065.pem && ln -fs /usr/share/ca-certificates/10065.pem /etc/ssl/certs/10065.pem"
I0701 22:52:29.077245 206763 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10065.pem
I0701 22:52:29.080388 206763 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul 1 22:28 /usr/share/ca-certificates/10065.pem
I0701 22:52:29.080427 206763 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10065.pem
I0701 22:52:29.086413 206763 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10065.pem /etc/ssl/certs/51391683.0"
I0701 22:52:29.098563 206763 kubeadm.go:395] StartCluster: {Name:kubernetes-upgrade-20220701225208-10065 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-20220701225208-10065 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
I0701 22:52:29.098703 206763 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0701 22:52:29.135075 206763 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0701 22:52:29.145622 206763 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0701 22:52:29.158054 206763 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
I0701 22:52:29.158112 206763 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0701 22:52:29.166113 206763 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0701 22:52:29.166151 206763 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0701 22:52:33.502343 208328 out.go:204] - Generating certificates and keys ...
I0701 22:52:29.921511 199091 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I0701 22:52:33.603769 199091 api_server.go:266] https://192.168.67.2:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0701 22:52:33.603807 199091 api_server.go:102] status: https://192.168.67.2:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0701 22:52:33.922213 199091 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I0701 22:52:33.931303 199091 api_server.go:266] https://192.168.67.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0701 22:52:33.931336 199091 api_server.go:102] status: https://192.168.67.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0701 22:52:34.421974 199091 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I0701 22:52:34.430988 199091 api_server.go:266] https://192.168.67.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0701 22:52:34.431021 199091 api_server.go:102] status: https://192.168.67.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0701 22:52:37.135567 208328 out.go:204] - Booting up control plane ...
I0701 22:52:34.921923 199091 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I0701 22:52:35.019944 199091 api_server.go:266] https://192.168.67.2:8443/healthz returned 200:
ok
I0701 22:52:35.029329 199091 api_server.go:140] control plane version: v1.24.2
I0701 22:52:35.029355 199091 api_server.go:130] duration metric: took 5.608711067s to wait for apiserver health ...
I0701 22:52:35.029365 199091 cni.go:95] Creating CNI manager for ""
I0701 22:52:35.029374 199091 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0701 22:52:35.029383 199091 system_pods.go:43] waiting for kube-system pods to appear ...
I0701 22:52:35.235976 199091 system_pods.go:59] 6 kube-system pods found
I0701 22:52:35.236014 199091 system_pods.go:61] "coredns-6d4b75cb6d-9hr6m" [213c07e1-cfd7-4fa5-88d2-f2f672e1b4d4] Running
I0701 22:52:35.236027 199091 system_pods.go:61] "etcd-pause-20220701225037-10065" [66bc4828-ae63-4e73-bb55-23be63fe6bfe] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0701 22:52:35.236036 199091 system_pods.go:61] "kube-apiserver-pause-20220701225037-10065" [f4620885-8ff4-45e8-994f-32d0cdcc6a59] Running
I0701 22:52:35.236050 199091 system_pods.go:61] "kube-controller-manager-pause-20220701225037-10065" [a9b051f4-3ef2-4f1c-9530-1a7c43f8a755] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I0701 22:52:35.236060 199091 system_pods.go:61] "kube-proxy-2rj2j" [4427a6a7-009f-4357-8c8a-fedbba15c52e] Running
I0701 22:52:35.236070 199091 system_pods.go:61] "kube-scheduler-pause-20220701225037-10065" [5d0f25e0-6c06-4b94-9051-dba19aee73a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I0701 22:52:35.236078 199091 system_pods.go:74] duration metric: took 206.689349ms to wait for pod list to return data ...
I0701 22:52:35.236089 199091 node_conditions.go:102] verifying NodePressure condition ...
I0701 22:52:35.324763 199091 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
I0701 22:52:35.324798 199091 node_conditions.go:123] node cpu capacity is 8
I0701 22:52:35.324812 199091 node_conditions.go:105] duration metric: took 88.717509ms to run NodePressure ...
I0701 22:52:35.324836 199091 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0701 22:52:36.623041 199091 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (1.298185003s)
I0701 22:52:36.623078 199091 kubeadm.go:762] waiting for restarted kubelet to initialise ...
I0701 22:52:36.628299 199091 kubeadm.go:777] kubelet initialised
I0701 22:52:36.628326 199091 kubeadm.go:778] duration metric: took 5.235635ms waiting for restarted kubelet to initialise ...
I0701 22:52:36.628334 199091 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0701 22:52:36.634066 199091 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-9hr6m" in "kube-system" namespace to be "Ready" ...
I0701 22:52:38.647755 199091 pod_ready.go:102] pod "coredns-6d4b75cb6d-9hr6m" in "kube-system" namespace has status "Ready":"False"
I0701 22:52:41.146909 199091 pod_ready.go:102] pod "coredns-6d4b75cb6d-9hr6m" in "kube-system" namespace has status "Ready":"False"
I0701 22:52:42.644603 199091 pod_ready.go:92] pod "coredns-6d4b75cb6d-9hr6m" in "kube-system" namespace has status "Ready":"True"
I0701 22:52:42.644631 199091 pod_ready.go:81] duration metric: took 6.010537303s waiting for pod "coredns-6d4b75cb6d-9hr6m" in "kube-system" namespace to be "Ready" ...
I0701 22:52:42.644641 199091 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-20220701225037-10065" in "kube-system" namespace to be "Ready" ...
I0701 22:52:44.653983 199091 pod_ready.go:102] pod "etcd-pause-20220701225037-10065" in "kube-system" namespace has status "Ready":"False"
I0701 22:52:47.030778 206763 out.go:204] - Generating certificates and keys ...
I0701 22:52:47.033608 206763 out.go:204] - Booting up control plane ...
I0701 22:52:47.036090 206763 out.go:204] - Configuring RBAC rules ...
I0701 22:52:47.037726 206763 cni.go:95] Creating CNI manager for ""
I0701 22:52:47.037753 206763 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0701 22:52:47.037778 206763 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0701 22:52:47.037932 206763 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0701 22:52:47.038038 206763 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.26.0 minikube.k8s.io/commit=a9d0dc9dee163ffb569dd54a2ee17668627fbc04 minikube.k8s.io/name=kubernetes-upgrade-20220701225208-10065 minikube.k8s.io/updated_at=2022_07_01T22_52_47_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0701 22:52:47.298191 206763 kubeadm.go:1045] duration metric: took 260.299749ms to wait for elevateKubeSystemPrivileges.
I0701 22:52:47.298280 206763 ops.go:34] apiserver oom_adj: -16
I0701 22:52:47.309228 206763 kubeadm.go:397] StartCluster complete in 18.210667932s
I0701 22:52:47.309263 206763 settings.go:142] acquiring lock: {Name:mk46f1228f0a7b30ad1ce5ce48145fbdcfa93542 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0701 22:52:47.309372 206763 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig
I0701 22:52:47.310547 206763 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig: {Name:mk40c1a74a65307876af762788c72bf321eefc27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0701 22:52:47.311611 206763 kapi.go:59] client config for kubernetes-upgrade-20220701225208-10065: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kubernetes-upgrade-20220701225208-10065/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kubernetes-upgrade-20220701225208-10065/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x173d480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0701 22:52:47.825242 206763 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "kubernetes-upgrade-20220701225208-10065" rescaled to 1
I0701 22:52:47.825295 206763 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
I0701 22:52:47.827044 206763 out.go:177] * Verifying Kubernetes components...
I0701 22:52:47.825364 206763 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0701 22:52:47.825388 206763 addons.go:412] enableAddons start: toEnable=map[], additional=[]
I0701 22:52:47.825556 206763 config.go:178] Loaded profile config "kubernetes-upgrade-20220701225208-10065": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
I0701 22:52:47.828533 206763 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0701 22:52:47.828570 206763 addons.go:65] Setting default-storageclass=true in profile "kubernetes-upgrade-20220701225208-10065"
I0701 22:52:47.828596 206763 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-20220701225208-10065"
I0701 22:52:47.828568 206763 addons.go:65] Setting storage-provisioner=true in profile "kubernetes-upgrade-20220701225208-10065"
I0701 22:52:47.828681 206763 addons.go:153] Setting addon storage-provisioner=true in "kubernetes-upgrade-20220701225208-10065"
W0701 22:52:47.828697 206763 addons.go:162] addon storage-provisioner should already be in state true
I0701 22:52:47.828748 206763 host.go:66] Checking if "kubernetes-upgrade-20220701225208-10065" exists ...
I0701 22:52:47.828981 206763 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220701225208-10065 --format={{.State.Status}}
I0701 22:52:47.829270 206763 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220701225208-10065 --format={{.State.Status}}
I0701 22:52:47.879602 206763 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0701 22:52:47.878111 206763 kapi.go:59] client config for kubernetes-upgrade-20220701225208-10065: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kubernetes-upgrade-20220701225208-10065/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kubernetes-upgrade-20220701225208-10065/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x173d480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0701 22:52:47.881404 206763 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0701 22:52:47.881421 206763 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0701 22:52:47.881471 206763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220701225208-10065
I0701 22:52:47.884480 206763 addons.go:153] Setting addon default-storageclass=true in "kubernetes-upgrade-20220701225208-10065"
W0701 22:52:47.884509 206763 addons.go:162] addon default-storageclass should already be in state true
I0701 22:52:47.884537 206763 host.go:66] Checking if "kubernetes-upgrade-20220701225208-10065" exists ...
I0701 22:52:47.885042 206763 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220701225208-10065 --format={{.State.Status}}
I0701 22:52:47.926336 206763 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49331 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/kubernetes-upgrade-20220701225208-10065/id_rsa Username:docker}
I0701 22:52:47.933198 206763 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.76.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0701 22:52:47.933952 206763 kapi.go:59] client config for kubernetes-upgrade-20220701225208-10065: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kubernetes-upgrade-20220701225208-10065/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kubernetes-upgrade-20220701225208-10065/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x173d480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0701 22:52:47.934281 206763 api_server.go:51] waiting for apiserver process to appear ...
I0701 22:52:47.934354 206763 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0701 22:52:47.946439 206763 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
I0701 22:52:47.946466 206763 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0701 22:52:47.946534 206763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220701225208-10065
I0701 22:52:47.984184 206763 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49331 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/kubernetes-upgrade-20220701225208-10065/id_rsa Username:docker}
I0701 22:52:48.101025 206763 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0701 22:52:48.103318 206763 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0701 22:52:48.313092 206763 start.go:809] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS
I0701 22:52:48.313168 206763 api_server.go:71] duration metric: took 487.852779ms to wait for apiserver process to appear ...
I0701 22:52:48.313251 206763 api_server.go:87] waiting for apiserver healthz status ...
I0701 22:52:48.313272 206763 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
I0701 22:52:48.319040 206763 api_server.go:266] https://192.168.76.2:8443/healthz returned 200:
ok
I0701 22:52:48.320009 206763 api_server.go:140] control plane version: v1.16.0
I0701 22:52:48.320031 206763 api_server.go:130] duration metric: took 6.765002ms to wait for apiserver health ...
I0701 22:52:48.320040 206763 system_pods.go:43] waiting for kube-system pods to appear ...
I0701 22:52:48.324204 206763 system_pods.go:59] 0 kube-system pods found
I0701 22:52:48.324236 206763 retry.go:31] will retry after 263.082536ms: only 0 pod(s) have shown up
I0701 22:52:48.526108 206763 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
I0701 22:52:48.181914 208328 out.go:204] - Configuring RBAC rules ...
I0701 22:52:48.597591 208328 cni.go:95] Creating CNI manager for ""
I0701 22:52:48.597619 208328 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0701 22:52:48.597650 208328 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0701 22:52:48.597791 208328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0701 22:52:48.597904 208328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl label nodes minikube.k8s.io/version=v1.26.0 minikube.k8s.io/commit=a9d0dc9dee163ffb569dd54a2ee17668627fbc04 minikube.k8s.io/name=force-systemd-flag-20220701225213-10065 minikube.k8s.io/updated_at=2022_07_01T22_52_48_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0701 22:52:48.607405 208328 ops.go:34] apiserver oom_adj: -16
I0701 22:52:46.655609 199091 pod_ready.go:102] pod "etcd-pause-20220701225037-10065" in "kube-system" namespace has status "Ready":"False"
I0701 22:52:49.154945 199091 pod_ready.go:92] pod "etcd-pause-20220701225037-10065" in "kube-system" namespace has status "Ready":"True"
I0701 22:52:49.154979 199091 pod_ready.go:81] duration metric: took 6.510331143s waiting for pod "etcd-pause-20220701225037-10065" in "kube-system" namespace to be "Ready" ...
I0701 22:52:49.154993 199091 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-20220701225037-10065" in "kube-system" namespace to be "Ready" ...
I0701 22:52:49.158912 199091 pod_ready.go:92] pod "kube-apiserver-pause-20220701225037-10065" in "kube-system" namespace has status "Ready":"True"
I0701 22:52:49.158932 199091 pod_ready.go:81] duration metric: took 3.929952ms waiting for pod "kube-apiserver-pause-20220701225037-10065" in "kube-system" namespace to be "Ready" ...
I0701 22:52:49.158944 199091 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-20220701225037-10065" in "kube-system" namespace to be "Ready" ...
I0701 22:52:49.162788 199091 pod_ready.go:92] pod "kube-controller-manager-pause-20220701225037-10065" in "kube-system" namespace has status "Ready":"True"
I0701 22:52:49.162806 199091 pod_ready.go:81] duration metric: took 3.854918ms waiting for pod "kube-controller-manager-pause-20220701225037-10065" in "kube-system" namespace to be "Ready" ...
I0701 22:52:49.162814 199091 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-2rj2j" in "kube-system" namespace to be "Ready" ...
I0701 22:52:49.166657 199091 pod_ready.go:92] pod "kube-proxy-2rj2j" in "kube-system" namespace has status "Ready":"True"
I0701 22:52:49.166675 199091 pod_ready.go:81] duration metric: took 3.856564ms waiting for pod "kube-proxy-2rj2j" in "kube-system" namespace to be "Ready" ...
I0701 22:52:49.166682 199091 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-20220701225037-10065" in "kube-system" namespace to be "Ready" ...
I0701 22:52:49.170222 199091 pod_ready.go:92] pod "kube-scheduler-pause-20220701225037-10065" in "kube-system" namespace has status "Ready":"True"
I0701 22:52:49.170238 199091 pod_ready.go:81] duration metric: took 3.550181ms waiting for pod "kube-scheduler-pause-20220701225037-10065" in "kube-system" namespace to be "Ready" ...
I0701 22:52:49.170244 199091 pod_ready.go:38] duration metric: took 12.541901866s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0701 22:52:49.170257 199091 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0701 22:52:49.177370 199091 ops.go:34] apiserver oom_adj: -16
I0701 22:52:49.177396 199091 kubeadm.go:630] restartCluster took 27.495733284s
I0701 22:52:49.177403 199091 kubeadm.go:397] StartCluster complete in 27.542694024s
I0701 22:52:49.177417 199091 settings.go:142] acquiring lock: {Name:mk46f1228f0a7b30ad1ce5ce48145fbdcfa93542 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0701 22:52:49.177504 199091 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig
I0701 22:52:49.178553 199091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig: {Name:mk40c1a74a65307876af762788c72bf321eefc27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0701 22:52:49.179541 199091 kapi.go:59] client config for pause-20220701225037-10065: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/pause-20220701225037-10065/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/pause-20220701225037-10065/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x173d480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0701 22:52:49.181748 199091 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "pause-20220701225037-10065" rescaled to 1
I0701 22:52:49.181800 199091 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}
I0701 22:52:49.184498 199091 out.go:177] * Verifying Kubernetes components...
I0701 22:52:49.181825 199091 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0701 22:52:49.181876 199091 addons.go:412] enableAddons start: toEnable=map[], additional=[]
I0701 22:52:49.182002 199091 config.go:178] Loaded profile config "pause-20220701225037-10065": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
I0701 22:52:49.185869 199091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0701 22:52:49.185995 199091 addons.go:65] Setting storage-provisioner=true in profile "pause-20220701225037-10065"
I0701 22:52:49.186027 199091 addons.go:153] Setting addon storage-provisioner=true in "pause-20220701225037-10065"
W0701 22:52:49.186035 199091 addons.go:162] addon storage-provisioner should already be in state true
I0701 22:52:49.186082 199091 host.go:66] Checking if "pause-20220701225037-10065" exists ...
I0701 22:52:49.186312 199091 addons.go:65] Setting default-storageclass=true in profile "pause-20220701225037-10065"
I0701 22:52:49.186335 199091 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-20220701225037-10065"
I0701 22:52:49.186575 199091 cli_runner.go:164] Run: docker container inspect pause-20220701225037-10065 --format={{.State.Status}}
I0701 22:52:49.186592 199091 cli_runner.go:164] Run: docker container inspect pause-20220701225037-10065 --format={{.State.Status}}
I0701 22:52:49.229553 199091 kapi.go:59] client config for pause-20220701225037-10065: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/pause-20220701225037-10065/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/pause-20220701225037-10065/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x173d480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0701 22:52:49.232766 199091 addons.go:153] Setting addon default-storageclass=true in "pause-20220701225037-10065"
W0701 22:52:49.232797 199091 addons.go:162] addon default-storageclass should already be in state true
I0701 22:52:49.232832 199091 host.go:66] Checking if "pause-20220701225037-10065" exists ...
I0701 22:52:49.233350 199091 cli_runner.go:164] Run: docker container inspect pause-20220701225037-10065 --format={{.State.Status}}
I0701 22:52:49.238093 199091 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0701 22:52:48.810741 208328 kubeadm.go:1045] duration metric: took 212.989007ms to wait for elevateKubeSystemPrivileges.
I0701 22:52:49.043336 208328 kubeadm.go:397] StartCluster complete in 15.929959319s
I0701 22:52:49.043372 208328 settings.go:142] acquiring lock: {Name:mk46f1228f0a7b30ad1ce5ce48145fbdcfa93542 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0701 22:52:49.043519 208328 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig
I0701 22:52:49.044926 208328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig: {Name:mk40c1a74a65307876af762788c72bf321eefc27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0701 22:52:49.046217 208328 kapi.go:59] client config for force-systemd-flag-20220701225213-10065: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/force-systemd-flag-20220701225213-10065/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/force-systemd-flag-20220701225213-10065/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x173d480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0701 22:52:49.046593 208328 cert_rotation.go:137] Starting client certificate rotation controller
I0701 22:52:49.560701 208328 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "force-systemd-flag-20220701225213-10065" rescaled to 1
I0701 22:52:49.560760 208328 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}
I0701 22:52:49.562990 208328 out.go:177] * Verifying Kubernetes components...
I0701 22:52:49.560823 208328 addons.go:412] enableAddons start: toEnable=map[], additional=[]
I0701 22:52:49.560828 208328 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0701 22:52:49.561015 208328 config.go:178] Loaded profile config "force-systemd-flag-20220701225213-10065": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
I0701 22:52:49.563121 208328 addons.go:65] Setting storage-provisioner=true in profile "force-systemd-flag-20220701225213-10065"
I0701 22:52:49.564608 208328 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0701 22:52:49.564639 208328 addons.go:153] Setting addon storage-provisioner=true in "force-systemd-flag-20220701225213-10065"
W0701 22:52:49.564658 208328 addons.go:162] addon storage-provisioner should already be in state true
I0701 22:52:49.563159 208328 addons.go:65] Setting default-storageclass=true in profile "force-systemd-flag-20220701225213-10065"
I0701 22:52:49.564958 208328 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "force-systemd-flag-20220701225213-10065"
I0701 22:52:49.565109 208328 host.go:66] Checking if "force-systemd-flag-20220701225213-10065" exists ...
I0701 22:52:49.565725 208328 cli_runner.go:164] Run: docker container inspect force-systemd-flag-20220701225213-10065 --format={{.State.Status}}
I0701 22:52:49.566010 208328 cli_runner.go:164] Run: docker container inspect force-systemd-flag-20220701225213-10065 --format={{.State.Status}}
I0701 22:52:49.614972 208328 kapi.go:59] client config for force-systemd-flag-20220701225213-10065: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/force-systemd-flag-20220701225213-10065/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/force-systemd-flag-20220701225213-10065/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x173d480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0701 22:52:49.618258 208328 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0701 22:52:49.239724 199091 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0701 22:52:49.239752 199091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0701 22:52:49.239819 199091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220701225037-10065
I0701 22:52:49.258674 199091 node_ready.go:35] waiting up to 6m0s for node "pause-20220701225037-10065" to be "Ready" ...
I0701 22:52:49.258713 199091 start.go:789] CoreDNS already contains "host.minikube.internal" host record, skipping...
I0701 22:52:49.275934 199091 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
I0701 22:52:49.275959 199091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0701 22:52:49.276022 199091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220701225037-10065
I0701 22:52:49.282775 199091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49302 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/pause-20220701225037-10065/id_rsa Username:docker}
I0701 22:52:49.314420 199091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49302 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/pause-20220701225037-10065/id_rsa Username:docker}
I0701 22:52:49.353393 199091 node_ready.go:49] node "pause-20220701225037-10065" has status "Ready":"True"
I0701 22:52:49.353421 199091 node_ready.go:38] duration metric: took 94.715118ms waiting for node "pause-20220701225037-10065" to be "Ready" ...
I0701 22:52:49.353431 199091 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0701 22:52:49.376793 199091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0701 22:52:49.411064 199091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0701 22:52:49.588118 199091 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-9hr6m" in "kube-system" namespace to be "Ready" ...
I0701 22:52:49.952992 199091 pod_ready.go:92] pod "coredns-6d4b75cb6d-9hr6m" in "kube-system" namespace has status "Ready":"True"
I0701 22:52:49.953018 199091 pod_ready.go:81] duration metric: took 364.874291ms waiting for pod "coredns-6d4b75cb6d-9hr6m" in "kube-system" namespace to be "Ready" ...
I0701 22:52:49.953030 199091 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-20220701225037-10065" in "kube-system" namespace to be "Ready" ...
I0701 22:52:50.353081 199091 pod_ready.go:92] pod "etcd-pause-20220701225037-10065" in "kube-system" namespace has status "Ready":"True"
I0701 22:52:50.353117 199091 pod_ready.go:81] duration metric: took 400.078131ms waiting for pod "etcd-pause-20220701225037-10065" in "kube-system" namespace to be "Ready" ...
I0701 22:52:50.353138 199091 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-20220701225037-10065" in "kube-system" namespace to be "Ready" ...
I0701 22:52:50.442105 199091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.065268006s)
I0701 22:52:50.442189 199091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.031095993s)
I0701 22:52:50.443946 199091 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
I0701 22:52:49.619789 208328 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0701 22:52:49.618005 208328 addons.go:153] Setting addon default-storageclass=true in "force-systemd-flag-20220701225213-10065"
W0701 22:52:49.619811 208328 addons.go:162] addon default-storageclass should already be in state true
I0701 22:52:49.619834 208328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0701 22:52:49.619920 208328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220701225213-10065
I0701 22:52:49.619839 208328 host.go:66] Checking if "force-systemd-flag-20220701225213-10065" exists ...
I0701 22:52:49.620448 208328 cli_runner.go:164] Run: docker container inspect force-systemd-flag-20220701225213-10065 --format={{.State.Status}}
I0701 22:52:49.657589 208328 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.85.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0701 22:52:49.658793 208328 kapi.go:59] client config for force-systemd-flag-20220701225213-10065: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/force-systemd-flag-20220701225213-10065/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/force-systemd-flag-20220701225213-10065/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x173d480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0701 22:52:49.659043 208328 api_server.go:51] waiting for apiserver process to appear ...
I0701 22:52:49.659075 208328 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0701 22:52:49.665459 208328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49336 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/force-systemd-flag-20220701225213-10065/id_rsa Username:docker}
I0701 22:52:49.671635 208328 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
I0701 22:52:49.671661 208328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0701 22:52:49.671711 208328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20220701225213-10065
I0701 22:52:49.726019 208328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49336 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/force-systemd-flag-20220701225213-10065/id_rsa Username:docker}
I0701 22:52:49.791490 208328 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0701 22:52:49.899223 208328 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0701 22:52:50.619799 208328 start.go:809] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS
I0701 22:52:50.619887 208328 api_server.go:71] duration metric: took 1.059085698s to wait for apiserver process to appear ...
I0701 22:52:50.619911 208328 api_server.go:87] waiting for apiserver healthz status ...
I0701 22:52:50.619928 208328 api_server.go:240] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
I0701 22:52:50.624730 208328 api_server.go:266] https://192.168.85.2:8443/healthz returned 200:
ok
I0701 22:52:50.625701 208328 api_server.go:140] control plane version: v1.24.2
I0701 22:52:50.625723 208328 api_server.go:130] duration metric: took 5.802308ms to wait for apiserver health ...
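[Editor's sketch: the api_server.go lines above record minikube polling the apiserver's /healthz endpoint until it returns HTTP 200 "ok". Below is a minimal Go illustration of that poll loop; waitForHealthz is a hypothetical helper, and the insecure TLS client is an assumption for brevity, standing in for the certificate-based client config dumped earlier in this log.]

    // Sketch only: poll an apiserver /healthz URL until it reports 200 "ok",
    // mirroring the api_server.go wait recorded above. InsecureSkipVerify is
    // an assumption; the real check authenticates with client certificates.
    package healthwait

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   2 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    // e.g. "https://192.168.85.2:8443/healthz returned 200: ok"
                    fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver not healthy after %s", timeout)
    }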
I0701 22:52:50.625736 208328 system_pods.go:43] waiting for kube-system pods to appear ...
I0701 22:52:50.632478 208328 system_pods.go:59] 4 kube-system pods found
I0701 22:52:50.632508 208328 system_pods.go:61] "etcd-force-systemd-flag-20220701225213-10065" [7dfa850f-3292-42a6-b159-70b8a024c860] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0701 22:52:50.632519 208328 system_pods.go:61] "kube-apiserver-force-systemd-flag-20220701225213-10065" [eb4760ab-68b0-42d1-baa9-8ddd6c6c48bd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I0701 22:52:50.632527 208328 system_pods.go:61] "kube-controller-manager-force-systemd-flag-20220701225213-10065" [0b64b4c3-1e9c-4d3c-8802-e289cb9c01db] Pending
I0701 22:52:50.632532 208328 system_pods.go:61] "kube-scheduler-force-systemd-flag-20220701225213-10065" [b19223ab-4fc9-45cd-845f-bd2425826f63] Pending
I0701 22:52:50.632538 208328 system_pods.go:74] duration metric: took 6.796879ms to wait for pod list to return data ...
I0701 22:52:50.632550 208328 kubeadm.go:572] duration metric: took 1.071753217s to wait for : map[apiserver:true system_pods:true] ...
I0701 22:52:50.632565 208328 node_conditions.go:102] verifying NodePressure condition ...
I0701 22:52:50.684636 208328 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
I0701 22:52:50.684673 208328 node_conditions.go:123] node cpu capacity is 8
I0701 22:52:50.684688 208328 node_conditions.go:105] duration metric: took 52.11771ms to run NodePressure ...
I0701 22:52:50.684700 208328 start.go:216] waiting for startup goroutines ...
I0701 22:52:50.722689 208328 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
I0701 22:52:50.724026 208328 addons.go:414] enableAddons completed in 1.16320983s
I0701 22:52:50.766744 208328 start.go:506] kubectl: 1.24.2, cluster: 1.24.2 (minor skew: 0)
I0701 22:52:50.768923 208328 out.go:177] * Done! kubectl is now configured to use "force-systemd-flag-20220701225213-10065" cluster and "default" namespace by default
I0701 22:52:50.445295 199091 addons.go:414] enableAddons completed in 1.263448076s
I0701 22:52:50.752295 199091 pod_ready.go:92] pod "kube-apiserver-pause-20220701225037-10065" in "kube-system" namespace has status "Ready":"True"
I0701 22:52:50.752319 199091 pod_ready.go:81] duration metric: took 399.166858ms waiting for pod "kube-apiserver-pause-20220701225037-10065" in "kube-system" namespace to be "Ready" ...
I0701 22:52:50.752332 199091 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-20220701225037-10065" in "kube-system" namespace to be "Ready" ...
I0701 22:52:51.153325 199091 pod_ready.go:92] pod "kube-controller-manager-pause-20220701225037-10065" in "kube-system" namespace has status "Ready":"True"
I0701 22:52:51.153350 199091 pod_ready.go:81] duration metric: took 401.010379ms waiting for pod "kube-controller-manager-pause-20220701225037-10065" in "kube-system" namespace to be "Ready" ...
I0701 22:52:51.153363 199091 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2rj2j" in "kube-system" namespace to be "Ready" ...
I0701 22:52:51.555026 199091 pod_ready.go:92] pod "kube-proxy-2rj2j" in "kube-system" namespace has status "Ready":"True"
I0701 22:52:51.555054 199091 pod_ready.go:81] duration metric: took 401.682852ms waiting for pod "kube-proxy-2rj2j" in "kube-system" namespace to be "Ready" ...
I0701 22:52:51.555067 199091 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-20220701225037-10065" in "kube-system" namespace to be "Ready" ...
I0701 22:52:51.952530 199091 pod_ready.go:92] pod "kube-scheduler-pause-20220701225037-10065" in "kube-system" namespace has status "Ready":"True"
I0701 22:52:51.952551 199091 pod_ready.go:81] duration metric: took 397.476742ms waiting for pod "kube-scheduler-pause-20220701225037-10065" in "kube-system" namespace to be "Ready" ...
I0701 22:52:51.952568 199091 pod_ready.go:38] duration metric: took 2.599125631s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
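[Editor's sketch: the pod_ready.go lines above wait, pod by pod, for each system-critical pod's PodReady condition to become "True". A minimal client-go illustration of the same per-pod wait follows; waitPodReady, its poll interval, and its error wording are assumptions, not minikube's code.]

    // Sketch only: poll one pod until its PodReady condition is True, the
    // same per-pod wait the pod_ready.go lines above record.
    package podwait

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
        for {
            pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                for _, cond := range pod.Status.Conditions {
                    if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
                        return nil // pod reports status "Ready":"True"
                    }
                }
            }
            select {
            case <-ctx.Done():
                return fmt.Errorf("pod %s/%s not Ready: %w", ns, name, ctx.Err())
            case <-time.After(400 * time.Millisecond):
            }
        }
    }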
I0701 22:52:51.952588 199091 api_server.go:51] waiting for apiserver process to appear ...
I0701 22:52:51.952624 199091 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0701 22:52:51.962068 199091 api_server.go:71] duration metric: took 2.780244206s to wait for apiserver process to appear ...
I0701 22:52:51.962095 199091 api_server.go:87] waiting for apiserver healthz status ...
I0701 22:52:51.962107 199091 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I0701 22:52:51.966185 199091 api_server.go:266] https://192.168.67.2:8443/healthz returned 200:
ok
I0701 22:52:51.966897 199091 api_server.go:140] control plane version: v1.24.2
I0701 22:52:51.966915 199091 api_server.go:130] duration metric: took 4.814015ms to wait for apiserver health ...
I0701 22:52:51.966922 199091 system_pods.go:43] waiting for kube-system pods to appear ...
I0701 22:52:52.155235 199091 system_pods.go:59] 7 kube-system pods found
I0701 22:52:52.155266 199091 system_pods.go:61] "coredns-6d4b75cb6d-9hr6m" [213c07e1-cfd7-4fa5-88d2-f2f672e1b4d4] Running
I0701 22:52:52.155272 199091 system_pods.go:61] "etcd-pause-20220701225037-10065" [66bc4828-ae63-4e73-bb55-23be63fe6bfe] Running
I0701 22:52:52.155278 199091 system_pods.go:61] "kube-apiserver-pause-20220701225037-10065" [f4620885-8ff4-45e8-994f-32d0cdcc6a59] Running
I0701 22:52:52.155285 199091 system_pods.go:61] "kube-controller-manager-pause-20220701225037-10065" [a9b051f4-3ef2-4f1c-9530-1a7c43f8a755] Running
I0701 22:52:52.155291 199091 system_pods.go:61] "kube-proxy-2rj2j" [4427a6a7-009f-4357-8c8a-fedbba15c52e] Running
I0701 22:52:52.155297 199091 system_pods.go:61] "kube-scheduler-pause-20220701225037-10065" [5d0f25e0-6c06-4b94-9051-dba19aee73a6] Running
I0701 22:52:52.155305 199091 system_pods.go:61] "storage-provisioner" [54985022-a6cd-4c59-af65-805d97e94819] Running
I0701 22:52:52.155312 199091 system_pods.go:74] duration metric: took 188.385275ms to wait for pod list to return data ...
I0701 22:52:52.155326 199091 default_sa.go:34] waiting for default service account to be created ...
I0701 22:52:52.353585 199091 default_sa.go:45] found service account: "default"
I0701 22:52:52.353608 199091 default_sa.go:55] duration metric: took 198.272792ms for default service account to be created ...
I0701 22:52:52.353617 199091 system_pods.go:116] waiting for k8s-apps to be running ...
I0701 22:52:52.554929 199091 system_pods.go:86] 7 kube-system pods found
I0701 22:52:52.554961 199091 system_pods.go:89] "coredns-6d4b75cb6d-9hr6m" [213c07e1-cfd7-4fa5-88d2-f2f672e1b4d4] Running
I0701 22:52:52.554969 199091 system_pods.go:89] "etcd-pause-20220701225037-10065" [66bc4828-ae63-4e73-bb55-23be63fe6bfe] Running
I0701 22:52:52.554975 199091 system_pods.go:89] "kube-apiserver-pause-20220701225037-10065" [f4620885-8ff4-45e8-994f-32d0cdcc6a59] Running
I0701 22:52:52.554981 199091 system_pods.go:89] "kube-controller-manager-pause-20220701225037-10065" [a9b051f4-3ef2-4f1c-9530-1a7c43f8a755] Running
I0701 22:52:52.554986 199091 system_pods.go:89] "kube-proxy-2rj2j" [4427a6a7-009f-4357-8c8a-fedbba15c52e] Running
I0701 22:52:52.554993 199091 system_pods.go:89] "kube-scheduler-pause-20220701225037-10065" [5d0f25e0-6c06-4b94-9051-dba19aee73a6] Running
I0701 22:52:52.555000 199091 system_pods.go:89] "storage-provisioner" [54985022-a6cd-4c59-af65-805d97e94819] Running
I0701 22:52:52.555009 199091 system_pods.go:126] duration metric: took 201.38641ms to wait for k8s-apps to be running ...
I0701 22:52:52.555023 199091 system_svc.go:44] waiting for kubelet service to be running ....
I0701 22:52:52.555071 199091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0701 22:52:52.564406 199091 system_svc.go:56] duration metric: took 9.380785ms WaitForService to wait for kubelet.
I0701 22:52:52.564428 199091 kubeadm.go:572] duration metric: took 3.38260708s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0701 22:52:52.564447 199091 node_conditions.go:102] verifying NodePressure condition ...
I0701 22:52:52.752008 199091 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
I0701 22:52:52.752029 199091 node_conditions.go:123] node cpu capacity is 8
I0701 22:52:52.752039 199091 node_conditions.go:105] duration metric: took 187.588064ms to run NodePressure ...
I0701 22:52:52.752050 199091 start.go:216] waiting for startup goroutines ...
I0701 22:52:52.791381 199091 start.go:506] kubectl: 1.24.2, cluster: 1.24.2 (minor skew: 0)
I0701 22:52:52.793212 199091 out.go:177] * Done! kubectl is now configured to use "pause-20220701225037-10065" cluster and "default" namespace by default
I0701 22:52:48.527777 206763 addons.go:414] enableAddons completed in 702.396219ms
I0701 22:52:48.590200 206763 system_pods.go:59] 1 kube-system pods found
I0701 22:52:48.590240 206763 system_pods.go:61] "storage-provisioner" [0a02c399-d0a8-496e-85fc-8a89a7ebd833] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
I0701 22:52:48.590256 206763 retry.go:31] will retry after 381.329545ms: only 1 pod(s) have shown up
I0701 22:52:48.974882 206763 system_pods.go:59] 1 kube-system pods found
I0701 22:52:48.974909 206763 system_pods.go:61] "storage-provisioner" [0a02c399-d0a8-496e-85fc-8a89a7ebd833] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
I0701 22:52:48.974919 206763 retry.go:31] will retry after 422.765636ms: only 1 pod(s) have shown up
I0701 22:52:49.401331 206763 system_pods.go:59] 1 kube-system pods found
I0701 22:52:49.401388 206763 system_pods.go:61] "storage-provisioner" [0a02c399-d0a8-496e-85fc-8a89a7ebd833] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
I0701 22:52:49.401404 206763 retry.go:31] will retry after 473.074753ms: only 1 pod(s) have shown up
I0701 22:52:49.877476 206763 system_pods.go:59] 1 kube-system pods found
I0701 22:52:49.877508 206763 system_pods.go:61] "storage-provisioner" [0a02c399-d0a8-496e-85fc-8a89a7ebd833] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
I0701 22:52:49.877523 206763 retry.go:31] will retry after 587.352751ms: only 1 pod(s) have shown up
I0701 22:52:50.467611 206763 system_pods.go:59] 1 kube-system pods found
I0701 22:52:50.467637 206763 system_pods.go:61] "storage-provisioner" [0a02c399-d0a8-496e-85fc-8a89a7ebd833] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
I0701 22:52:50.467648 206763 retry.go:31] will retry after 834.206799ms: only 1 pod(s) have shown up
I0701 22:52:51.305424 206763 system_pods.go:59] 1 kube-system pods found
I0701 22:52:51.305462 206763 system_pods.go:61] "storage-provisioner" [0a02c399-d0a8-496e-85fc-8a89a7ebd833] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
I0701 22:52:51.305478 206763 retry.go:31] will retry after 746.553905ms: only 1 pod(s) have shown up
I0701 22:52:52.054811 206763 system_pods.go:59] 1 kube-system pods found
I0701 22:52:52.054846 206763 system_pods.go:61] "storage-provisioner" [0a02c399-d0a8-496e-85fc-8a89a7ebd833] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
I0701 22:52:52.054861 206763 retry.go:31] will retry after 987.362415ms: only 1 pod(s) have shown up
I0701 22:52:53.045670 206763 system_pods.go:59] 1 kube-system pods found
I0701 22:52:53.045700 206763 system_pods.go:61] "storage-provisioner" [0a02c399-d0a8-496e-85fc-8a89a7ebd833] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
I0701 22:52:53.045713 206763 retry.go:31] will retry after 1.189835008s: only 1 pod(s) have shown up
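[Editor's sketch: the retry.go "will retry after ..." lines above come from a generic poll loop: each failed check schedules another attempt after a roughly growing, jittered delay. A Go illustration of that pattern follows; retryWithBackoff, the base delay, growth factor, and jitter range are all assumptions, not minikube's exact implementation.]

    // Sketch only: retry a check with a growing, jittered delay, as the
    // retry.go lines above record.
    package retrywait

    import (
        "fmt"
        "math/rand"
        "time"
    )

    func retryWithBackoff(check func() error, attempts int) error {
        delay := 300 * time.Millisecond
        var err error
        for i := 0; i < attempts; i++ {
            if err = check(); err == nil {
                return nil
            }
            // Jitter is why successive delays in the log are not strictly
            // increasing (e.g. 834ms followed by 746ms).
            jitter := time.Duration(rand.Int63n(int64(delay / 2)))
            time.Sleep(delay + jitter)
            delay = delay * 13 / 10 // ~1.3x growth; the factor is an assumption
        }
        return fmt.Errorf("condition not met after %d attempts: %w", attempts, err)
    }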
*
* ==> Docker <==
* -- Logs begin at Fri 2022-07-01 22:51:03 UTC, end at Fri 2022-07-01 22:52:54 UTC. --
Jul 01 22:52:19 pause-20220701225037-10065 dockerd[4133]: time="2022-07-01T22:52:19.436807852Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 601cfcfd8dc963a50ae23399869791feac0e98819f1a13a2301a351d0489cfbb 513f8c5aa07a106e4472168e93ec97cf702e15f6ff9fe8631d6e192d22d8d0dd], retrying...."
Jul 01 22:52:19 pause-20220701225037-10065 dockerd[4133]: time="2022-07-01T22:52:19.620907136Z" level=info msg="Removing stale sandbox 30d2f41f10c2965a9f8311617b688f9f369d6bcd7a9a4dc0c5c0bc1fa85ffa76 (12657b5aa4cd0cf9943e5bf389c91bc43e58c3d8658b7507ac9f1592418144b6)"
Jul 01 22:52:19 pause-20220701225037-10065 dockerd[4133]: time="2022-07-01T22:52:19.622776511Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint b2bba9bb4e19c1ae40c3535ae411107621843192a02b6ed2325d84c4d326142d edc5062a01e1b18d3cbd9a523749df608c075bd6a75d0a4408044dec65a1b26f], retrying...."
Jul 01 22:52:19 pause-20220701225037-10065 dockerd[4133]: time="2022-07-01T22:52:19.800936200Z" level=info msg="Removing stale sandbox 74b348bd57b6165ab4e8c030924c6f0333b61af9afddeb1f2e120fe152838ccc (bcdc199fee91b0769e4432e861b58c794cba9ccd7abfca3700fc250500081fbf)"
Jul 01 22:52:19 pause-20220701225037-10065 dockerd[4133]: time="2022-07-01T22:52:19.802857009Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint b2bba9bb4e19c1ae40c3535ae411107621843192a02b6ed2325d84c4d326142d 9309beca9771ba42e738bba3079d7319ada816e2a39ceacf292dc690041e581f], retrying...."
Jul 01 22:52:19 pause-20220701225037-10065 dockerd[4133]: time="2022-07-01T22:52:19.853658512Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Jul 01 22:52:19 pause-20220701225037-10065 dockerd[4133]: time="2022-07-01T22:52:19.922143618Z" level=info msg="Loading containers: done."
Jul 01 22:52:19 pause-20220701225037-10065 dockerd[4133]: time="2022-07-01T22:52:19.951379980Z" level=info msg="Docker daemon" commit=a89b842 graphdriver(s)=overlay2 version=20.10.17
Jul 01 22:52:19 pause-20220701225037-10065 dockerd[4133]: time="2022-07-01T22:52:19.951507271Z" level=info msg="Daemon has completed initialization"
Jul 01 22:52:19 pause-20220701225037-10065 systemd[1]: Started Docker Application Container Engine.
Jul 01 22:52:19 pause-20220701225037-10065 dockerd[4133]: time="2022-07-01T22:52:19.985269185Z" level=info msg="API listen on [::]:2376"
Jul 01 22:52:19 pause-20220701225037-10065 dockerd[4133]: time="2022-07-01T22:52:19.989782419Z" level=info msg="API listen on /var/run/docker.sock"
Jul 01 22:52:20 pause-20220701225037-10065 dockerd[4133]: time="2022-07-01T22:52:20.203475255Z" level=error msg="Failed to compute size of container rootfs 93fd92f11bbf0488a7b7410f273560237f99f2ab49e5f6dd554334def47ced33: mount does not exist"
Jul 01 22:52:25 pause-20220701225037-10065 dockerd[4133]: time="2022-07-01T22:52:25.038995340Z" level=info msg="ignoring event" container=14a43d53786097a882d293a59018789aea5c352662b761659de462d52f9ec4c6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jul 01 22:52:25 pause-20220701225037-10065 dockerd[4133]: time="2022-07-01T22:52:25.040180861Z" level=info msg="ignoring event" container=6ae728ad062b9982acb1ce89649399fcb89ef3d9bbef8b2d130bab636b6d5175 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jul 01 22:52:25 pause-20220701225037-10065 dockerd[4133]: time="2022-07-01T22:52:25.082999327Z" level=info msg="ignoring event" container=58283dd133ae642b405ee3c7aeaa7c6db15709b0fb84eb80b0bf56b1260ddbfe module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jul 01 22:52:25 pause-20220701225037-10065 dockerd[4133]: time="2022-07-01T22:52:25.083030975Z" level=info msg="ignoring event" container=1489f2e1da5fa5d9faed742a3153289625b23203233bb4f1b1d668862dbef857 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jul 01 22:52:25 pause-20220701225037-10065 dockerd[4133]: time="2022-07-01T22:52:25.084007935Z" level=info msg="ignoring event" container=fcb211cabc39a2f6cb16c9f7023cbc86dd153a3d8e72e4251ee5ded8f6f7e88b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jul 01 22:52:25 pause-20220701225037-10065 dockerd[4133]: time="2022-07-01T22:52:25.101595019Z" level=info msg="ignoring event" container=d21e2232a42b9cccd8b0dc9bccb3c1a4ec85d8b65e22d14d3ffe1a63861e7933 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jul 01 22:52:25 pause-20220701225037-10065 dockerd[4133]: time="2022-07-01T22:52:25.947364822Z" level=error msg="27d94ed4dff426c6a071de822f0be84b04cd233d1d857c244036a485ab459ed3 cleanup: failed to delete container from containerd: no such container"
Jul 01 22:52:25 pause-20220701225037-10065 dockerd[4133]: time="2022-07-01T22:52:25.948395327Z" level=error msg="3bdb4eff83b2a1bba0917dc8286d21eefbf084fe40433829315797631d0abc61 cleanup: failed to delete container from containerd: no such container"
Jul 01 22:52:25 pause-20220701225037-10065 dockerd[4133]: time="2022-07-01T22:52:25.948555303Z" level=error msg="df4b3f4ae16edee7a124d491027b694991bfcee9919abafc85fc4a5ebbe07d46 cleanup: failed to delete container from containerd: no such container"
Jul 01 22:52:25 pause-20220701225037-10065 dockerd[4133]: time="2022-07-01T22:52:25.950094192Z" level=error msg="a18b374894e46673d1a54815b05cff205760f02f7e2d3e92e402daf36b9aa0cb cleanup: failed to delete container from containerd: no such container"
Jul 01 22:52:25 pause-20220701225037-10065 dockerd[4133]: time="2022-07-01T22:52:25.950207135Z" level=error msg="b3907295cfb5852b50dc450d0790f8cbbb47f7ea564986012bc2b042314de154 cleanup: failed to delete container from containerd: no such container"
Jul 01 22:52:26 pause-20220701225037-10065 dockerd[4133]: time="2022-07-01T22:52:26.152236859Z" level=error msg="7706b293b404cdc30fe1b6f1ae71369c4e88288ffac286c0e7ecf65d46849390 cleanup: failed to delete container from containerd: no such container"
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
bc8eb3b85df95 6e38f40d628db 4 seconds ago Running storage-provisioner 0 d316c5bbd2020
7b4f3ab61f2ac a4ca41631cc7a 17 seconds ago Running coredns 3 3b9491065a997
8c94216472f33 a634548d10b03 18 seconds ago Running kube-proxy 3 196d6896c1e45
7c466a26df783 aebe758cef4cd 26 seconds ago Running etcd 3 d9afa4105dbb8
6ef1f459e2361 d3377ffb7177c 26 seconds ago Running kube-apiserver 2 33f21d943ba1c
49c9854851e7e 34cdf99b1bb3b 26 seconds ago Running kube-controller-manager 3 d836c4bd13bbb
0bd8d7e9873d2 5d725196c1f47 26 seconds ago Running kube-scheduler 2 e1bf6726a1549
7706b293b404c a4ca41631cc7a 33 seconds ago Created coredns 2 fcb211cabc39a
b3907295cfb58 aebe758cef4cd 33 seconds ago Created etcd 2 d21e2232a42b9
27d94ed4dff42 5d725196c1f47 33 seconds ago Created kube-scheduler 1 6ae728ad062b9
3bdb4eff83b2a d3377ffb7177c 33 seconds ago Created kube-apiserver 1 14a43d5378609
df4b3f4ae16ed 34cdf99b1bb3b 33 seconds ago Created kube-controller-manager 2 58283dd133ae6
a18b374894e46 a634548d10b03 34 seconds ago Created kube-proxy 2 1489f2e1da5fa
*
* ==> coredns [7706b293b404] <==
*
*
* ==> coredns [7b4f3ab61f2a] <==
* .:53
[INFO] plugin/reload: Running configuration MD5 = 821b10ea3c4cce3a8581cf6a394d92f0
CoreDNS-1.8.6
linux/amd64, go1.17.1, 13a9191
*
* ==> describe nodes <==
* Name: pause-20220701225037-10065
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=pause-20220701225037-10065
kubernetes.io/os=linux
minikube.k8s.io/commit=a9d0dc9dee163ffb569dd54a2ee17668627fbc04
minikube.k8s.io/name=pause-20220701225037-10065
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2022_07_01T22_51_29_0700
minikube.k8s.io/version=v1.26.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Fri, 01 Jul 2022 22:51:26 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: pause-20220701225037-10065
AcquireTime: <unset>
RenewTime: Fri, 01 Jul 2022 22:52:54 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Fri, 01 Jul 2022 22:52:33 +0000 Fri, 01 Jul 2022 22:51:23 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Fri, 01 Jul 2022 22:52:33 +0000 Fri, 01 Jul 2022 22:51:23 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Fri, 01 Jul 2022 22:52:33 +0000 Fri, 01 Jul 2022 22:51:23 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Fri, 01 Jul 2022 22:52:33 +0000 Fri, 01 Jul 2022 22:51:39 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.67.2
Hostname: pause-20220701225037-10065
Capacity:
cpu: 8
ephemeral-storage: 304695084Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32873484Ki
pods: 110
Allocatable:
cpu: 8
ephemeral-storage: 304695084Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32873484Ki
pods: 110
System Info:
Machine ID: bbe1e1cef6e940328962dca52b3c5731
System UUID: 6a2b12fa-900c-459e-8bef-f54d21d18140
Boot ID: b4d8e8fa-97b6-4834-a1fb-bac1d7a9adea
Kernel Version: 5.15.0-1012-gcp
OS Image: Ubuntu 20.04.4 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.17
Kubelet Version: v1.24.2
Kube-Proxy Version: v1.24.2
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (7 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system coredns-6d4b75cb6d-9hr6m 100m (1%) 0 (0%) 70Mi (0%) 170Mi (0%) 72s
kube-system etcd-pause-20220701225037-10065 100m (1%) 0 (0%) 100Mi (0%) 0 (0%) 85s
kube-system kube-apiserver-pause-20220701225037-10065 250m (3%) 0 (0%) 0 (0%) 0 (0%) 85s
kube-system kube-controller-manager-pause-20220701225037-10065 200m (2%) 0 (0%) 0 (0%) 0 (0%) 85s
kube-system kube-proxy-2rj2j 0 (0%) 0 (0%) 0 (0%) 0 (0%) 73s
kube-system kube-scheduler-pause-20220701225037-10065 100m (1%) 0 (0%) 0 (0%) 0 (0%) 85s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 750m (9%) 0 (0%)
memory 170Mi (0%) 170Mi (0%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 17s kube-proxy
Normal Starting 71s kube-proxy
Normal NodeHasSufficientMemory 97s (x5 over 97s) kubelet Node pause-20220701225037-10065 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 97s (x4 over 97s) kubelet Node pause-20220701225037-10065 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 97s (x4 over 97s) kubelet Node pause-20220701225037-10065 status is now: NodeHasSufficientPID
Normal Starting 85s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 85s kubelet Node pause-20220701225037-10065 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 85s kubelet Node pause-20220701225037-10065 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 85s kubelet Node pause-20220701225037-10065 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 85s kubelet Updated Node Allocatable limit across pods
Normal NodeReady 75s kubelet Node pause-20220701225037-10065 status is now: NodeReady
Normal RegisteredNode 73s node-controller Node pause-20220701225037-10065 event: Registered Node pause-20220701225037-10065 in Controller
Normal Starting 27s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 26s (x8 over 27s) kubelet Node pause-20220701225037-10065 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 26s (x8 over 27s) kubelet Node pause-20220701225037-10065 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 26s (x7 over 27s) kubelet Node pause-20220701225037-10065 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 26s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 8s node-controller Node pause-20220701225037-10065 event: Registered Node pause-20220701225037-10065 in Controller
*
* ==> dmesg <==
* [ +0.007937] FS-Cache: N-cookie d=000000009fe4b6b3{9p.inode} n=00000000243e4d19
[ +0.008737] FS-Cache: N-key=[8] '8da00f0200000000'
[ +0.008910] FS-Cache: Duplicate cookie detected
[ +0.004855] FS-Cache: O-cookie c=00000005 [p=00000003 fl=226 nc=0 na=1]
[ +0.008102] FS-Cache: O-cookie d=000000009fe4b6b3{9p.inode} n=00000000ca7b3bc3
[ +0.008715] FS-Cache: O-key=[8] '8da00f0200000000'
[ +0.006303] FS-Cache: N-cookie c=0000000d [p=00000003 fl=2 nc=0 na=1]
[ +0.007941] FS-Cache: N-cookie d=000000009fe4b6b3{9p.inode} n=0000000026c10e99
[ +0.008753] FS-Cache: N-key=[8] '8da00f0200000000'
[ +3.240272] FS-Cache: Duplicate cookie detected
[ +0.004683] FS-Cache: O-cookie c=00000006 [p=00000003 fl=226 nc=0 na=1]
[ +0.006838] FS-Cache: O-cookie d=000000009fe4b6b3{9p.inode} n=00000000a79cd091
[ +0.007363] FS-Cache: O-key=[8] '8ca00f0200000000'
[ +0.004958] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
[ +0.007950] FS-Cache: N-cookie d=000000009fe4b6b3{9p.inode} n=00000000243e4d19
[ +0.008729] FS-Cache: N-key=[8] '8ca00f0200000000'
[ +0.400056] FS-Cache: Duplicate cookie detected
[ +0.004681] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
[ +0.006754] FS-Cache: O-cookie d=000000009fe4b6b3{9p.inode} n=00000000697de9f9
[ +0.007371] FS-Cache: O-key=[8] '96a00f0200000000'
[ +0.004962] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
[ +0.007943] FS-Cache: N-cookie d=000000009fe4b6b3{9p.inode} n=00000000874d29a6
[ +0.008739] FS-Cache: N-key=[8] '96a00f0200000000'
[Jul 1 22:32] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
[Jul 1 22:51] process 'docker/tmp/qemu-check884735338/check' started with executable stack
*
* ==> etcd [7c466a26df78] <==
* {"level":"warn","ts":"2022-07-01T22:52:35.726Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"121.562014ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"warn","ts":"2022-07-01T22:52:35.726Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"204.925365ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/default/pause-20220701225037-10065.16fdd6608161a7c1\" ","response":"range_response_count:1 size:695"}
{"level":"info","ts":"2022-07-01T22:52:35.727Z","caller":"traceutil/trace.go:171","msg":"trace[807054841] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:418; }","duration":"121.64982ms","start":"2022-07-01T22:52:35.605Z","end":"2022-07-01T22:52:35.727Z","steps":["trace[807054841] 'agreement among raft nodes before linearized reading' (duration: 32.316474ms)","trace[807054841] 'range keys from in-memory index tree' (duration: 89.227071ms)"],"step_count":2}
{"level":"info","ts":"2022-07-01T22:52:35.727Z","caller":"traceutil/trace.go:171","msg":"trace[249898954] range","detail":"{range_begin:/registry/events/default/pause-20220701225037-10065.16fdd6608161a7c1; range_end:; response_count:1; response_revision:418; }","duration":"204.99609ms","start":"2022-07-01T22:52:35.522Z","end":"2022-07-01T22:52:35.727Z","steps":["trace[249898954] 'agreement among raft nodes before linearized reading' (duration: 115.629619ms)","trace[249898954] 'range keys from in-memory index tree' (duration: 89.247471ms)"],"step_count":2}
{"level":"warn","ts":"2022-07-01T22:52:35.726Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"112.656864ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-6d4b75cb6d-9hr6m\" ","response":"range_response_count:1 size:4455"}
{"level":"info","ts":"2022-07-01T22:52:35.727Z","caller":"traceutil/trace.go:171","msg":"trace[557320255] range","detail":"{range_begin:/registry/pods/kube-system/coredns-6d4b75cb6d-9hr6m; range_end:; response_count:1; response_revision:418; }","duration":"112.797131ms","start":"2022-07-01T22:52:35.614Z","end":"2022-07-01T22:52:35.727Z","steps":["trace[557320255] 'agreement among raft nodes before linearized reading' (duration: 23.361675ms)","trace[557320255] 'range keys from in-memory index tree' (duration: 89.267427ms)"],"step_count":2}
{"level":"warn","ts":"2022-07-01T22:52:35.726Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"254.315074ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:coredns\" ","response":"range_response_count:1 size:406"}
{"level":"info","ts":"2022-07-01T22:52:35.727Z","caller":"traceutil/trace.go:171","msg":"trace[1041215956] range","detail":"{range_begin:/registry/clusterroles/system:coredns; range_end:; response_count:1; response_revision:418; }","duration":"254.609543ms","start":"2022-07-01T22:52:35.472Z","end":"2022-07-01T22:52:35.727Z","steps":["trace[1041215956] 'agreement among raft nodes before linearized reading' (duration: 164.969271ms)","trace[1041215956] 'range keys from in-memory index tree' (duration: 89.3116ms)"],"step_count":2}
{"level":"info","ts":"2022-07-01T22:52:35.864Z","caller":"traceutil/trace.go:171","msg":"trace[1812694428] linearizableReadLoop","detail":"{readStateIndex:444; appliedIndex:443; }","duration":"131.231877ms","start":"2022-07-01T22:52:35.733Z","end":"2022-07-01T22:52:35.864Z","steps":["trace[1812694428] 'read index received' (duration: 96.020022ms)","trace[1812694428] 'applied index is now lower than readState.Index' (duration: 35.21102ms)"],"step_count":2}
{"level":"info","ts":"2022-07-01T22:52:35.864Z","caller":"traceutil/trace.go:171","msg":"trace[231457023] transaction","detail":"{read_only:false; response_revision:420; number_of_response:1; }","duration":"131.68496ms","start":"2022-07-01T22:52:35.733Z","end":"2022-07-01T22:52:35.864Z","steps":["trace[231457023] 'process raft request' (duration: 96.464594ms)","trace[231457023] 'compare' (duration: 34.970597ms)"],"step_count":2}
{"level":"warn","ts":"2022-07-01T22:52:35.865Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"131.37818ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/system:coredns\" ","response":"range_response_count:1 size:417"}
{"level":"info","ts":"2022-07-01T22:52:35.865Z","caller":"traceutil/trace.go:171","msg":"trace[2129085806] range","detail":"{range_begin:/registry/clusterrolebindings/system:coredns; range_end:; response_count:1; response_revision:421; }","duration":"131.788586ms","start":"2022-07-01T22:52:35.733Z","end":"2022-07-01T22:52:35.865Z","steps":["trace[2129085806] 'agreement among raft nodes before linearized reading' (duration: 131.338818ms)"],"step_count":1}
{"level":"info","ts":"2022-07-01T22:52:36.108Z","caller":"traceutil/trace.go:171","msg":"trace[1646082560] linearizableReadLoop","detail":"{readStateIndex:447; appliedIndex:447; }","duration":"191.564873ms","start":"2022-07-01T22:52:35.916Z","end":"2022-07-01T22:52:36.108Z","steps":["trace[1646082560] 'read index received' (duration: 191.55564ms)","trace[1646082560] 'applied index is now lower than readState.Index' (duration: 7.985µs)"],"step_count":2}
{"level":"warn","ts":"2022-07-01T22:52:36.305Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"290.939878ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-2rj2j\" ","response":"range_response_count:1 size:4440"}
{"level":"warn","ts":"2022-07-01T22:52:36.305Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"383.57946ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/default/pause-20220701225037-10065.16fdd66081618d45\" ","response":"range_response_count:1 size:697"}
{"level":"info","ts":"2022-07-01T22:52:36.305Z","caller":"traceutil/trace.go:171","msg":"trace[280191980] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-2rj2j; range_end:; response_count:1; response_revision:421; }","duration":"291.047415ms","start":"2022-07-01T22:52:36.014Z","end":"2022-07-01T22:52:36.305Z","steps":["trace[280191980] 'agreement among raft nodes before linearized reading' (duration: 93.985949ms)","trace[280191980] 'range keys from in-memory index tree' (duration: 196.915818ms)"],"step_count":2}
{"level":"info","ts":"2022-07-01T22:52:36.305Z","caller":"traceutil/trace.go:171","msg":"trace[1978112232] range","detail":"{range_begin:/registry/events/default/pause-20220701225037-10065.16fdd66081618d45; range_end:; response_count:1; response_revision:421; }","duration":"383.62573ms","start":"2022-07-01T22:52:35.921Z","end":"2022-07-01T22:52:36.305Z","steps":["trace[1978112232] 'agreement among raft nodes before linearized reading' (duration: 186.546812ms)","trace[1978112232] 'range keys from in-memory index tree' (duration: 196.933245ms)"],"step_count":2}
{"level":"warn","ts":"2022-07-01T22:52:36.305Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-07-01T22:52:35.921Z","time spent":"383.710242ms","remote":"127.0.0.1:44434","response type":"/etcdserverpb.KV/Range","request count":0,"request size":70,"response count":1,"response size":721,"request content":"key:\"/registry/events/default/pause-20220701225037-10065.16fdd66081618d45\" "}
{"level":"warn","ts":"2022-07-01T22:52:36.305Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"388.64521ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" ","response":"range_response_count:1 size:4014"}
{"level":"info","ts":"2022-07-01T22:52:36.305Z","caller":"traceutil/trace.go:171","msg":"trace[507203427] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:421; }","duration":"388.991407ms","start":"2022-07-01T22:52:35.916Z","end":"2022-07-01T22:52:36.305Z","steps":["trace[507203427] 'agreement among raft nodes before linearized reading' (duration: 191.664767ms)","trace[507203427] 'range keys from in-memory index tree' (duration: 196.93931ms)"],"step_count":2}
{"level":"warn","ts":"2022-07-01T22:52:36.305Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-07-01T22:52:35.916Z","time spent":"389.070328ms","remote":"127.0.0.1:44544","response type":"/etcdserverpb.KV/Range","request count":0,"request size":43,"response count":1,"response size":4038,"request content":"key:\"/registry/deployments/kube-system/coredns\" "}
{"level":"info","ts":"2022-07-01T22:52:36.561Z","caller":"traceutil/trace.go:171","msg":"trace[211957152] linearizableReadLoop","detail":"{readStateIndex:451; appliedIndex:450; }","duration":"151.343056ms","start":"2022-07-01T22:52:36.410Z","end":"2022-07-01T22:52:36.561Z","steps":["trace[211957152] 'read index received' (duration: 97.0169ms)","trace[211957152] 'applied index is now lower than readState.Index' (duration: 54.325616ms)"],"step_count":2}
{"level":"info","ts":"2022-07-01T22:52:36.561Z","caller":"traceutil/trace.go:171","msg":"trace[1019660369] transaction","detail":"{read_only:false; response_revision:425; number_of_response:1; }","duration":"154.702012ms","start":"2022-07-01T22:52:36.407Z","end":"2022-07-01T22:52:36.561Z","steps":["trace[1019660369] 'process raft request' (duration: 100.409573ms)","trace[1019660369] 'compare' (duration: 54.146656ms)"],"step_count":2}
{"level":"warn","ts":"2022-07-01T22:52:36.561Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"151.593712ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ranges/serviceips\" ","response":"range_response_count:1 size:118"}
{"level":"info","ts":"2022-07-01T22:52:36.561Z","caller":"traceutil/trace.go:171","msg":"trace[1826853412] range","detail":"{range_begin:/registry/ranges/serviceips; range_end:; response_count:1; response_revision:425; }","duration":"151.664518ms","start":"2022-07-01T22:52:36.410Z","end":"2022-07-01T22:52:36.561Z","steps":["trace[1826853412] 'agreement among raft nodes before linearized reading' (duration: 151.459022ms)"],"step_count":1}
*
* ==> etcd [b3907295cfb5] <==
*
*
* ==> kernel <==
* 22:52:54 up 35 min, 0 users, load average: 8.33, 3.72, 2.00
Linux pause-20220701225037-10065 5.15.0-1012-gcp #17~20.04.1-Ubuntu SMP Thu Jun 23 16:10:34 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.4 LTS"
*
* ==> kube-apiserver [3bdb4eff83b2] <==
*
*
* ==> kube-apiserver [6ef1f459e236] <==
* I0701 22:52:33.583767 1 apiservice_controller.go:97] Starting APIServiceRegistrationController
I0701 22:52:33.583794 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0701 22:52:33.584066 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0701 22:52:33.584074 1 shared_informer.go:255] Waiting for caches to sync for cluster_authentication_trust_controller
I0701 22:52:33.584116 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I0701 22:52:33.594827 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
E0701 22:52:33.701380 1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
I0701 22:52:33.714610 1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
I0701 22:52:33.782263 1 shared_informer.go:262] Caches are synced for crd-autoregister
I0701 22:52:33.782327 1 cache.go:39] Caches are synced for autoregister controller
I0701 22:52:33.782347 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0701 22:52:33.782807 1 shared_informer.go:262] Caches are synced for node_authorizer
I0701 22:52:33.782837 1 apf_controller.go:322] Running API Priority and Fairness config worker
I0701 22:52:33.783831 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0701 22:52:33.784864 1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
I0701 22:52:34.254569 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0701 22:52:34.570215 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0701 22:52:35.871223 1 controller.go:611] quota admission added evaluator for: serviceaccounts
I0701 22:52:35.915018 1 controller.go:611] quota admission added evaluator for: deployments.apps
I0701 22:52:36.584990 1 controller.go:611] quota admission added evaluator for: daemonsets.apps
I0701 22:52:36.603067 1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0701 22:52:36.608721 1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0701 22:52:37.038591 1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
I0701 22:52:47.029542 1 controller.go:611] quota admission added evaluator for: endpoints
I0701 22:52:47.129474 1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
*
* ==> kube-controller-manager [49c9854851e7] <==
* I0701 22:52:46.936624 1 shared_informer.go:262] Caches are synced for taint
I0701 22:52:46.936708 1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone:
I0701 22:52:46.936725 1 taint_manager.go:187] "Starting NoExecuteTaintManager"
W0701 22:52:46.936785 1 node_lifecycle_controller.go:1014] Missing timestamp for Node pause-20220701225037-10065. Assuming now as a timestamp.
I0701 22:52:46.936820 1 node_lifecycle_controller.go:1215] Controller detected that zone is now in state Normal.
I0701 22:52:46.936888 1 event.go:294] "Event occurred" object="pause-20220701225037-10065" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-20220701225037-10065 event: Registered Node pause-20220701225037-10065 in Controller"
I0701 22:52:46.941619 1 shared_informer.go:262] Caches are synced for ReplicationController
I0701 22:52:46.947874 1 shared_informer.go:262] Caches are synced for persistent volume
I0701 22:52:46.950186 1 shared_informer.go:262] Caches are synced for disruption
I0701 22:52:46.950207 1 disruption.go:371] Sending events to api server.
I0701 22:52:46.991055 1 shared_informer.go:262] Caches are synced for node
I0701 22:52:46.991114 1 range_allocator.go:173] Starting range CIDR allocator
I0701 22:52:46.991121 1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
I0701 22:52:46.991132 1 shared_informer.go:262] Caches are synced for cidrallocator
I0701 22:52:46.998428 1 shared_informer.go:262] Caches are synced for GC
I0701 22:52:47.004581 1 shared_informer.go:262] Caches are synced for TTL
I0701 22:52:47.010824 1 shared_informer.go:262] Caches are synced for endpoint_slice
I0701 22:52:47.024171 1 shared_informer.go:262] Caches are synced for daemon sets
I0701 22:52:47.029841 1 shared_informer.go:262] Caches are synced for resource quota
I0701 22:52:47.029942 1 shared_informer.go:262] Caches are synced for resource quota
I0701 22:52:47.040094 1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
I0701 22:52:47.137501 1 shared_informer.go:262] Caches are synced for attach detach
I0701 22:52:47.545921 1 shared_informer.go:262] Caches are synced for garbage collector
I0701 22:52:47.545954 1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0701 22:52:47.561098 1 shared_informer.go:262] Caches are synced for garbage collector
*
* ==> kube-controller-manager [df4b3f4ae16e] <==
*
*
* ==> kube-proxy [8c94216472f3] <==
* I0701 22:52:36.998570 1 node.go:163] Successfully retrieved node IP: 192.168.67.2
I0701 22:52:36.998651 1 server_others.go:138] "Detected node IP" address="192.168.67.2"
I0701 22:52:36.998677 1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I0701 22:52:37.034527 1 server_others.go:206] "Using iptables Proxier"
I0701 22:52:37.034574 1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
I0701 22:52:37.034587 1 server_others.go:214] "Creating dualStackProxier for iptables"
I0701 22:52:37.034613 1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
I0701 22:52:37.034642 1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I0701 22:52:37.034781 1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I0701 22:52:37.035020 1 server.go:661] "Version info" version="v1.24.2"
I0701 22:52:37.035041 1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0701 22:52:37.035935 1 config.go:226] "Starting endpoint slice config controller"
I0701 22:52:37.035966 1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
I0701 22:52:37.035965 1 config.go:317] "Starting service config controller"
I0701 22:52:37.035977 1 shared_informer.go:255] Waiting for caches to sync for service config
I0701 22:52:37.036023 1 config.go:444] "Starting node config controller"
I0701 22:52:37.036029 1 shared_informer.go:255] Waiting for caches to sync for node config
I0701 22:52:37.136042 1 shared_informer.go:262] Caches are synced for service config
I0701 22:52:37.136071 1 shared_informer.go:262] Caches are synced for node config
I0701 22:52:37.136085 1 shared_informer.go:262] Caches are synced for endpoint slice config
*
* ==> kube-proxy [a18b374894e4] <==
*
*
* ==> kube-scheduler [0bd8d7e9873d] <==
* I0701 22:52:30.289677 1 serving.go:348] Generated self-signed cert in-memory
W0701 22:52:33.604603 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0701 22:52:33.604642 1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0701 22:52:33.604654 1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
W0701 22:52:33.604663 1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0701 22:52:33.704410 1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.2"
I0701 22:52:33.704444 1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0701 22:52:33.706058 1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
I0701 22:52:33.706295 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0701 22:52:33.706319 1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0701 22:52:33.706351 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0701 22:52:33.806824 1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kube-scheduler [27d94ed4dff4] <==
*
*
* ==> kubelet <==
* -- Logs begin at Fri 2022-07-01 22:51:03 UTC, end at Fri 2022-07-01 22:52:55 UTC. --
Jul 01 22:52:33 pause-20220701225037-10065 kubelet[5509]: E0701 22:52:33.355649 5509 kubelet.go:2424] "Error getting node" err="node \"pause-20220701225037-10065\" not found"
Jul 01 22:52:33 pause-20220701225037-10065 kubelet[5509]: E0701 22:52:33.456256 5509 kubelet.go:2424] "Error getting node" err="node \"pause-20220701225037-10065\" not found"
Jul 01 22:52:33 pause-20220701225037-10065 kubelet[5509]: E0701 22:52:33.556881 5509 kubelet.go:2424] "Error getting node" err="node \"pause-20220701225037-10065\" not found"
Jul 01 22:52:33 pause-20220701225037-10065 kubelet[5509]: I0701 22:52:33.657874 5509 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
Jul 01 22:52:33 pause-20220701225037-10065 kubelet[5509]: I0701 22:52:33.685617 5509 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
Jul 01 22:52:33 pause-20220701225037-10065 kubelet[5509]: I0701 22:52:33.726373 5509 kubelet_node_status.go:108] "Node was previously registered" node="pause-20220701225037-10065"
Jul 01 22:52:33 pause-20220701225037-10065 kubelet[5509]: I0701 22:52:33.726476 5509 kubelet_node_status.go:73] "Successfully registered node" node="pause-20220701225037-10065"
Jul 01 22:52:33 pause-20220701225037-10065 kubelet[5509]: I0701 22:52:33.799697 5509 apiserver.go:52] "Watching apiserver"
Jul 01 22:52:33 pause-20220701225037-10065 kubelet[5509]: I0701 22:52:33.806286 5509 topology_manager.go:200] "Topology Admit Handler"
Jul 01 22:52:33 pause-20220701225037-10065 kubelet[5509]: I0701 22:52:33.808794 5509 topology_manager.go:200] "Topology Admit Handler"
Jul 01 22:52:33 pause-20220701225037-10065 kubelet[5509]: I0701 22:52:33.905427 5509 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4427a6a7-009f-4357-8c8a-fedbba15c52e-lib-modules\") pod \"kube-proxy-2rj2j\" (UID: \"4427a6a7-009f-4357-8c8a-fedbba15c52e\") " pod="kube-system/kube-proxy-2rj2j"
Jul 01 22:52:33 pause-20220701225037-10065 kubelet[5509]: I0701 22:52:33.905474 5509 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qpdl\" (UniqueName: \"kubernetes.io/projected/4427a6a7-009f-4357-8c8a-fedbba15c52e-kube-api-access-4qpdl\") pod \"kube-proxy-2rj2j\" (UID: \"4427a6a7-009f-4357-8c8a-fedbba15c52e\") " pod="kube-system/kube-proxy-2rj2j"
Jul 01 22:52:33 pause-20220701225037-10065 kubelet[5509]: I0701 22:52:33.905500 5509 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4427a6a7-009f-4357-8c8a-fedbba15c52e-xtables-lock\") pod \"kube-proxy-2rj2j\" (UID: \"4427a6a7-009f-4357-8c8a-fedbba15c52e\") " pod="kube-system/kube-proxy-2rj2j"
Jul 01 22:52:33 pause-20220701225037-10065 kubelet[5509]: I0701 22:52:33.905538 5509 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4427a6a7-009f-4357-8c8a-fedbba15c52e-kube-proxy\") pod \"kube-proxy-2rj2j\" (UID: \"4427a6a7-009f-4357-8c8a-fedbba15c52e\") " pod="kube-system/kube-proxy-2rj2j"
Jul 01 22:52:34 pause-20220701225037-10065 kubelet[5509]: I0701 22:52:34.006270 5509 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/213c07e1-cfd7-4fa5-88d2-f2f672e1b4d4-config-volume\") pod \"coredns-6d4b75cb6d-9hr6m\" (UID: \"213c07e1-cfd7-4fa5-88d2-f2f672e1b4d4\") " pod="kube-system/coredns-6d4b75cb6d-9hr6m"
Jul 01 22:52:34 pause-20220701225037-10065 kubelet[5509]: I0701 22:52:34.006552 5509 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgjh4\" (UniqueName: \"kubernetes.io/projected/213c07e1-cfd7-4fa5-88d2-f2f672e1b4d4-kube-api-access-hgjh4\") pod \"coredns-6d4b75cb6d-9hr6m\" (UID: \"213c07e1-cfd7-4fa5-88d2-f2f672e1b4d4\") " pod="kube-system/coredns-6d4b75cb6d-9hr6m"
Jul 01 22:52:34 pause-20220701225037-10065 kubelet[5509]: I0701 22:52:34.006598 5509 reconciler.go:157] "Reconciler: start to sync state"
Jul 01 22:52:36 pause-20220701225037-10065 kubelet[5509]: I0701 22:52:36.761091 5509 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="196d6896c1e45fc0dcd7b5ebc721dd381181b03be239a4bbd01b09c22e258cc1"
Jul 01 22:52:36 pause-20220701225037-10065 kubelet[5509]: I0701 22:52:36.982885 5509 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="3b9491065a997758567c4ce62b106caef6a5ad7c61704da54b5ec635e24000a5"
Jul 01 22:52:39 pause-20220701225037-10065 kubelet[5509]: I0701 22:52:39.032005 5509 prober_manager.go:274] "Failed to trigger a manual run" probe="Readiness"
Jul 01 22:52:40 pause-20220701225037-10065 kubelet[5509]: I0701 22:52:40.037454 5509 prober_manager.go:274] "Failed to trigger a manual run" probe="Readiness"
Jul 01 22:52:42 pause-20220701225037-10065 kubelet[5509]: I0701 22:52:42.266775 5509 prober_manager.go:274] "Failed to trigger a manual run" probe="Readiness"
Jul 01 22:52:50 pause-20220701225037-10065 kubelet[5509]: I0701 22:52:50.446424 5509 topology_manager.go:200] "Topology Admit Handler"
Jul 01 22:52:50 pause-20220701225037-10065 kubelet[5509]: I0701 22:52:50.631797 5509 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/54985022-a6cd-4c59-af65-805d97e94819-tmp\") pod \"storage-provisioner\" (UID: \"54985022-a6cd-4c59-af65-805d97e94819\") " pod="kube-system/storage-provisioner"
Jul 01 22:52:50 pause-20220701225037-10065 kubelet[5509]: I0701 22:52:50.631860 5509 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jp9lq\" (UniqueName: \"kubernetes.io/projected/54985022-a6cd-4c59-af65-805d97e94819-kube-api-access-jp9lq\") pod \"storage-provisioner\" (UID: \"54985022-a6cd-4c59-af65-805d97e94819\") " pod="kube-system/storage-provisioner"
*
* ==> storage-provisioner [bc8eb3b85df9] <==
* I0701 22:52:51.067643 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0701 22:52:51.076653 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0701 22:52:51.076696 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0701 22:52:51.091071 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0701 22:52:51.091149 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8e9623bf-a8d3-4559-bdf3-e0cb6f256a1f", APIVersion:"v1", ResourceVersion:"484", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-20220701225037-10065_564a2dc6-f764-49dc-a377-99355d55ef35 became leader
I0701 22:52:51.091195 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_pause-20220701225037-10065_564a2dc6-f764-49dc-a377-99355d55ef35!
I0701 22:52:51.191919 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_pause-20220701225037-10065_564a2dc6-f764-49dc-a377-99355d55ef35!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-20220701225037-10065 -n pause-20220701225037-10065
=== CONT TestPause/serial/SecondStartNoReconfiguration
helpers_test.go:261: (dbg) Run: kubectl --context pause-20220701225037-10065 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods:
helpers_test.go:272: ======> post-mortem[TestPause/serial/SecondStartNoReconfiguration]: describe non-running pods <======
helpers_test.go:275: (dbg) Run: kubectl --context pause-20220701225037-10065 describe pod
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context pause-20220701225037-10065 describe pod : exit status 1 (50.537904ms)
** stderr **
error: resource name may not be empty
** /stderr **
helpers_test.go:277: kubectl --context pause-20220701225037-10065 describe pod : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestPause/serial/SecondStartNoReconfiguration]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect pause-20220701225037-10065
helpers_test.go:235: (dbg) docker inspect pause-20220701225037-10065:
-- stdout --
[
{
"Id": "6fdd3cdb5625721beeb61d0b64ed4631627d88fccb6ebc36823697d95b406333",
"Created": "2022-07-01T22:51:02.300245205Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 179327,
"ExitCode": 0,
"Error": "",
"StartedAt": "2022-07-01T22:51:02.896812642Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:573e7be5768273a7845baee1ae90fa2e33b83b10a7fbb0f0f41efbf29b53d1f1",
"ResolvConfPath": "/var/lib/docker/containers/6fdd3cdb5625721beeb61d0b64ed4631627d88fccb6ebc36823697d95b406333/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/6fdd3cdb5625721beeb61d0b64ed4631627d88fccb6ebc36823697d95b406333/hostname",
"HostsPath": "/var/lib/docker/containers/6fdd3cdb5625721beeb61d0b64ed4631627d88fccb6ebc36823697d95b406333/hosts",
"LogPath": "/var/lib/docker/containers/6fdd3cdb5625721beeb61d0b64ed4631627d88fccb6ebc36823697d95b406333/6fdd3cdb5625721beeb61d0b64ed4631627d88fccb6ebc36823697d95b406333-json.log",
"Name": "/pause-20220701225037-10065",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"pause-20220701225037-10065:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "pause-20220701225037-10065",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 2147483648,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"KernelMemory": 0,
"KernelMemoryTCP": 0,
"MemoryReservation": 0,
"MemorySwap": 4294967296,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/de2850ddba24e6c81e58a5b27b79ff76542340f48f851abbd07613669fc88f10-init/diff:/var/lib/docker/overlay2/5ebfdebe4e44329780b59de0c8b0bf968018a1b5ca93874176282e5d48d8b4db/diff:/var/lib/docker/overlay2/a9c49a4053225d2ebd241f2745fdb18a43efefbab32cf6272e8d78d867eb875b/diff:/var/lib/docker/overlay2/0802fcdede86d6719023f3e2022375a3aa09436b5608d6e9fb20c7b421c40a53/diff:/var/lib/docker/overlay2/fefaf4af75695592d1b8cb09d5ce0ff06c9ec6ca405ab6634301a6372ff90fb0/diff:/var/lib/docker/overlay2/523e8a6a67ba3ae86bf70befa1ddfb565da9d640492c57c39213b01eae3c45bb/diff:/var/lib/docker/overlay2/01825d9999ae652487354dbb98195f290170df60b824e0640efbad3b57057fe5/diff:/var/lib/docker/overlay2/aef6dee284ba27a78360c962618cc5f5921d5df9d4f9cee3e1e05aa7385cae2e/diff:/var/lib/docker/overlay2/d09388e767dcebde123a39eb77d2767b334ffed162b0013c549be8cfafaf32f1/diff:/var/lib/docker/overlay2/a961c54dbc25723780f6af5c7257c9131c92c20cbae5fdb309a448a04177fb0d/diff:/var/lib/docker/overlay2/070954
da53f10d594c7db858ceee957f45dcc290e20fd38e5d2ae3ee6d32a509/diff:/var/lib/docker/overlay2/cf6729cace23a11c96ef50c2039fbe915ea3375a5eea2cc505a97ee37144f10b/diff:/var/lib/docker/overlay2/bb5aa1c8e98214b00a8ca54e8c73310989228e658165d383142a35f522afd361/diff:/var/lib/docker/overlay2/a47fe538fad9a10ad54bda1ed2c2db3d6f7515279f5793af55de9b603f32cc38/diff:/var/lib/docker/overlay2/7aa9fa6b1d74c93745eb01c008d86447d556fffffec606a6517ddd7debc0e0ce/diff:/var/lib/docker/overlay2/105c0e50338102d95508115a573be5ad60e7ce3c240dfa4925d2485bd7550ff1/diff:/var/lib/docker/overlay2/c635bf001d9cfba6946f0a7acd8a209d33c7a4fd24004260b9674c2f4cfe3225/diff:/var/lib/docker/overlay2/5b7b2968c2b74d88b68c69896db41b100a7b4f657c4847b630d3b6385435c736/diff:/var/lib/docker/overlay2/00e793fd0209aee8ea522c9f888a1504bdf3f110a6b59767117491d2f73ded51/diff:/var/lib/docker/overlay2/06582d415f14a950df0d932d005adba6b7bdef9b03e7ec96cd9ee0f3e4f88186/diff:/var/lib/docker/overlay2/d90b5a2b218ac3ce4ee84214f7cc5d9f0cfb4de5cceb562de24197fc3fe97252/diff:/var/lib/d
ocker/overlay2/1d6b6e5d2af72440a4ffe851359e0fcd180b6230c1bbdc6471e1e311550d2af8/diff:/var/lib/docker/overlay2/43098fdc498ae414f4e85d3f2ad689f15233c4149f38411bcdde8c0c6858b45a/diff:/var/lib/docker/overlay2/3dee36596b8505375a1dbe51da977c260f679f20a286b38a4f47fb94bf95483e/diff:/var/lib/docker/overlay2/4365a3944f40a62fd04dc6c3a1f6fc50b645e83950cb5f65afd99ae47b29dcf9/diff:/var/lib/docker/overlay2/10d86d22181d1ff7d3cf42653b6656d6d4e285c1fc95f4a0e3b228c23cf01c2a/diff:/var/lib/docker/overlay2/adba91f6364e8d3eafcc2f1921be64caa35af120fd78598b34158330f1b07c11/diff:/var/lib/docker/overlay2/b11dac8829c82d605c4c9aa2e461e88f5c53fe9ea03f0346a29a84006b96572f/diff:/var/lib/docker/overlay2/a8542b5e868fc08d56cebacdbc3ac16bef43ba9dbb70582466e031f13e2e369c/diff:/var/lib/docker/overlay2/5a7d32bcfb9e1f040b36571d7c2cb9c85eeba09cbc900808cb340a0690d76b53/diff:/var/lib/docker/overlay2/39b83f88bb66f5b127c544d4e4c52cb02acef43dc7d39d5c1739766c7a412049/diff:/var/lib/docker/overlay2/aa7e1d59944cb05594b182c96ad9e4e96d2caf7b22b208ace35452f0017
0f188/diff:/var/lib/docker/overlay2/9428da6997644cd26c066788b084b9abf00b4fbcab734b62b5e135ce3c26e6c6/diff:/var/lib/docker/overlay2/8e5398d669dc8937e39f7dd4dd9fe88f23d8d0408bb7e88f2fcf26f277e57ed7/diff:/var/lib/docker/overlay2/b1ca9bb6fe129d779d179c77dc675fe715e3efe2089cd22455f23076ea6d09e5/diff:/var/lib/docker/overlay2/f8dcad825e8399dc23061b3c8e0ed4867960cdfc9c50a08f2151920b070b150e/diff:/var/lib/docker/overlay2/4b5dcd090442aa9f2a952de485202e6da12be1f754edcc4bb1e179d651d71fc6/diff:/var/lib/docker/overlay2/23101e237652ba79b16635a2274893cd7e3ddf64fed56ef622669a79653e325b/diff:/var/lib/docker/overlay2/0c0e5d0c6ae6c618678469f0a52205dd4f46a14aded01fdcea8aa29f7a5ef810/diff:/var/lib/docker/overlay2/fdc530c0025cfd7b5d7995c60e81f48e9e8b53dacc5ce33a06c63ea380ab7364/diff:/var/lib/docker/overlay2/b88e0fc2e685a4af24fb7b1bd918a66cf2b17d9e94befd1a58d79580164b5002/diff:/var/lib/docker/overlay2/e7d090aef23d3aafdc818f796a577e07c009fae5593337bee3b45a27008c9b8f/diff",
"MergedDir": "/var/lib/docker/overlay2/de2850ddba24e6c81e58a5b27b79ff76542340f48f851abbd07613669fc88f10/merged",
"UpperDir": "/var/lib/docker/overlay2/de2850ddba24e6c81e58a5b27b79ff76542340f48f851abbd07613669fc88f10/diff",
"WorkDir": "/var/lib/docker/overlay2/de2850ddba24e6c81e58a5b27b79ff76542340f48f851abbd07613669fc88f10/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "pause-20220701225037-10065",
"Source": "/var/lib/docker/volumes/pause-20220701225037-10065/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "pause-20220701225037-10065",
"Domainname": "",
"User": "root",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "pause-20220701225037-10065",
"name.minikube.sigs.k8s.io": "pause-20220701225037-10065",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "72e452ed1271ed755b19576efdfb23a166e0e8112cb1fa2665eea6db99922b76",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49302"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49301"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49296"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49300"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49298"
}
]
},
"SandboxKey": "/var/run/docker/netns/72e452ed1271",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"pause-20220701225037-10065": {
"IPAMConfig": {
"IPv4Address": "192.168.67.2"
},
"Links": null,
"Aliases": [
"6fdd3cdb5625",
"pause-20220701225037-10065"
],
"NetworkID": "3b47d047354072632c767ea8f5e73418621d396187310dd665246be007cd885d",
"EndpointID": "2035834098a4d2a78b597653e3d023565d83a47ebdad2e54bd19a457098a3cfe",
"Gateway": "192.168.67.1",
"IPAddress": "192.168.67.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:c0:a8:43:02",
"DriverOpts": null
}
}
}
}
]
-- /stdout --
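(The full docker inspect dump above is verbose; when only a few fields matter for triage, docker inspect --format with a Go template is enough. A minimal sketch, shelling out from Go; the profile name is taken from the run above, and the chosen fields are just an example:)

package main

import (
        "fmt"
        "os/exec"
)

func main() {
        // Pull only the fields the post-mortem above actually looks at:
        // container state, restart count, and the host port mapped to 8443/tcp.
        out, err := exec.Command("docker", "inspect",
                "--format", `{{.State.Status}} restarts={{.RestartCount}} apiserver={{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`,
                "pause-20220701225037-10065").Output()
        if err != nil {
                panic(err)
        }
        fmt.Print(string(out)) // e.g. "running restarts=0 apiserver=49298"
}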
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p pause-20220701225037-10065 -n pause-20220701225037-10065
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p pause-20220701225037-10065 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-20220701225037-10065 logs -n 25: (4.355995966s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs:
-- stdout --
*
* ==> Audit <==
* |---------|-------------------------------------------|----------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|-------------------------------------------|----------|---------|---------|---------------------|---------------------|
| start | -p | minikube | jenkins | v1.26.0 | 01 Jul 22 22:50 UTC | |
| | insufficient-storage-20220701225024-10065 | | | | | |
| | --memory=2048 --output=json | | | | | |
| | --wait=true --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| delete | -p | minikube | jenkins | v1.26.0 | 01 Jul 22 22:50 UTC | 01 Jul 22 22:50 UTC |
| | insufficient-storage-20220701225024-10065 | | | | | |
| start | -p pause-20220701225037-10065 | minikube | jenkins | v1.26.0 | 01 Jul 22 22:50 UTC | 01 Jul 22 22:51 UTC |
| | --memory=2048 | | | | | |
| | --install-addons=false | | | | | |
| | --wait=all --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| start | -p | minikube | jenkins | v1.26.0 | 01 Jul 22 22:50 UTC | |
| | NoKubernetes-20220701225037-10065 | | | | | |
| | --no-kubernetes | | | | | |
| | --kubernetes-version=1.20 | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| start | -p | minikube | jenkins | v1.26.0 | 01 Jul 22 22:50 UTC | 01 Jul 22 22:51 UTC |
| | offline-docker-20220701225037-10065 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| | --memory=2048 --wait=true | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| start | -p | minikube | jenkins | v1.26.0 | 01 Jul 22 22:50 UTC | 01 Jul 22 22:51 UTC |
| | NoKubernetes-20220701225037-10065 | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| start | -p | minikube | jenkins | v1.26.0 | 01 Jul 22 22:51 UTC | 01 Jul 22 22:51 UTC |
| | NoKubernetes-20220701225037-10065 | | | | | |
| | --no-kubernetes --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| delete | -p | minikube | jenkins | v1.26.0 | 01 Jul 22 22:51 UTC | 01 Jul 22 22:51 UTC |
| | NoKubernetes-20220701225037-10065 | | | | | |
| start | -p | minikube | jenkins | v1.26.0 | 01 Jul 22 22:51 UTC | 01 Jul 22 22:52 UTC |
| | stopped-upgrade-20220701225037-10065 | | | | | |
| | --memory=2200 --alsologtostderr | | | | | |
| | -v=1 --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| start | -p | minikube | jenkins | v1.26.0 | 01 Jul 22 22:51 UTC | 01 Jul 22 22:51 UTC |
| | NoKubernetes-20220701225037-10065 | | | | | |
| | --no-kubernetes --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| ssh | -p | minikube | jenkins | v1.26.0 | 01 Jul 22 22:51 UTC | |
| | NoKubernetes-20220701225037-10065 | | | | | |
| | sudo systemctl is-active --quiet | | | | | |
| | service kubelet | | | | | |
| profile | list | minikube | jenkins | v1.26.0 | 01 Jul 22 22:51 UTC | 01 Jul 22 22:51 UTC |
| delete | -p | minikube | jenkins | v1.26.0 | 01 Jul 22 22:51 UTC | 01 Jul 22 22:51 UTC |
| | offline-docker-20220701225037-10065 | | | | | |
| start | -p pause-20220701225037-10065 | minikube | jenkins | v1.26.0 | 01 Jul 22 22:51 UTC | 01 Jul 22 22:52 UTC |
| | --alsologtostderr | | | | | |
| | -v=1 --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| profile | list --output=json | minikube | jenkins | v1.26.0 | 01 Jul 22 22:51 UTC | 01 Jul 22 22:51 UTC |
| stop | -p | minikube | jenkins | v1.26.0 | 01 Jul 22 22:51 UTC | 01 Jul 22 22:51 UTC |
| | NoKubernetes-20220701225037-10065 | | | | | |
| start | -p | minikube | jenkins | v1.26.0 | 01 Jul 22 22:51 UTC | 01 Jul 22 22:52 UTC |
| | NoKubernetes-20220701225037-10065 | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| ssh | -p | minikube | jenkins | v1.26.0 | 01 Jul 22 22:52 UTC | |
| | NoKubernetes-20220701225037-10065 | | | | | |
| | sudo systemctl is-active --quiet | | | | | |
| | service kubelet | | | | | |
| delete | -p | minikube | jenkins | v1.26.0 | 01 Jul 22 22:52 UTC | 01 Jul 22 22:52 UTC |
| | NoKubernetes-20220701225037-10065 | | | | | |
| start | -p | minikube | jenkins | v1.26.0 | 01 Jul 22 22:52 UTC | |
| | kubernetes-upgrade-20220701225208-10065 | | | | | |
| | --memory=2200 | | | | | |
| | --kubernetes-version=v1.16.0 | | | | | |
| | --alsologtostderr -v=1 --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| delete | -p | minikube | jenkins | v1.26.0 | 01 Jul 22 22:52 UTC | 01 Jul 22 22:52 UTC |
| | stopped-upgrade-20220701225037-10065 | | | | | |
| start | -p | minikube | jenkins | v1.26.0 | 01 Jul 22 22:52 UTC | 01 Jul 22 22:52 UTC |
| | force-systemd-flag-20220701225213-10065 | | | | | |
| | --memory=2048 --force-systemd | | | | | |
| | --alsologtostderr -v=5 --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| ssh | force-systemd-flag-20220701225213-10065 | minikube | jenkins | v1.26.0 | 01 Jul 22 22:52 UTC | 01 Jul 22 22:52 UTC |
| | ssh docker info --format | | | | | |
| | {{.CgroupDriver}} | | | | | |
| delete | -p | minikube | jenkins | v1.26.0 | 01 Jul 22 22:52 UTC | 01 Jul 22 22:52 UTC |
| | force-systemd-flag-20220701225213-10065 | | | | | |
| start | -p | minikube | jenkins | v1.26.0 | 01 Jul 22 22:52 UTC | |
| | force-systemd-env-20220701225253-10065 | | | | | |
| | --memory=2048 --alsologtostderr | | | | | |
| | -v=5 --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
|---------|-------------------------------------------|----------|---------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2022/07/01 22:52:53
Running on machine: ubuntu-20-agent-3
Binary: Built with gc go1.18.3 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0701 22:52:53.940921 221539 out.go:296] Setting OutFile to fd 1 ...
I0701 22:52:53.941123 221539 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0701 22:52:53.941134 221539 out.go:309] Setting ErrFile to fd 2...
I0701 22:52:53.941141 221539 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0701 22:52:53.941546 221539 root.go:329] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/bin
I0701 22:52:53.941834 221539 out.go:303] Setting JSON to false
I0701 22:52:53.943971 221539 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":2126,"bootTime":1656713848,"procs":1039,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1012-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0701 22:52:53.944054 221539 start.go:125] virtualization: kvm guest
I0701 22:52:53.946667 221539 out.go:177] * [force-systemd-env-20220701225253-10065] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
I0701 22:52:53.948307 221539 out.go:177] - MINIKUBE_LOCATION=14483
I0701 22:52:53.948234 221539 notify.go:193] Checking for updates...
I0701 22:52:53.951115 221539 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0701 22:52:53.952800 221539 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig
I0701 22:52:53.954259 221539 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube
I0701 22:52:53.955763 221539 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0701 22:52:53.957164 221539 out.go:177] - MINIKUBE_FORCE_SYSTEMD=true
I0701 22:52:53.958983 221539 config.go:178] Loaded profile config "kubernetes-upgrade-20220701225208-10065": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
I0701 22:52:53.959082 221539 config.go:178] Loaded profile config "missing-upgrade-20220701225156-10065": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
I0701 22:52:53.959196 221539 config.go:178] Loaded profile config "pause-20220701225037-10065": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
I0701 22:52:53.959247 221539 driver.go:360] Setting default libvirt URI to qemu:///system
I0701 22:52:54.014071 221539 docker.go:137] docker version: linux-20.10.17
I0701 22:52:54.014182 221539 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0701 22:52:54.170485 221539 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:69 OomKillDisable:true NGoroutines:69 SystemTime:2022-07-01 22:52:54.056497508 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1012-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662447616 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
I0701 22:52:54.170618 221539 docker.go:254] overlay module found
I0701 22:52:54.172719 221539 out.go:177] * Using the docker driver based on user configuration
I0701 22:52:54.174052 221539 start.go:284] selected driver: docker
I0701 22:52:54.174068 221539 start.go:808] validating driver "docker" against <nil>
I0701 22:52:54.174092 221539 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0701 22:52:54.175084 221539 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0701 22:52:54.355509 221539 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:71 OomKillDisable:true NGoroutines:71 SystemTime:2022-07-01 22:52:54.21665129 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1012-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662447616 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
I0701 22:52:54.355668 221539 start_flags.go:296] no existing cluster config was found, will generate one from the flags
I0701 22:52:54.355904 221539 start_flags.go:835] Wait components to verify : map[apiserver:true system_pods:true]
I0701 22:52:54.358061 221539 out.go:177] * Using Docker driver with root privileges
I0701 22:52:54.359559 221539 cni.go:95] Creating CNI manager for ""
I0701 22:52:54.359589 221539 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0701 22:52:54.359599 221539 start_flags.go:310] config:
{Name:force-systemd-env-20220701225253-10065 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:force-systemd-env-20220701225253-10065 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
I0701 22:52:54.361347 221539 out.go:177] * Starting control plane node force-systemd-env-20220701225253-10065 in cluster force-systemd-env-20220701225253-10065
I0701 22:52:54.362675 221539 cache.go:120] Beginning downloading kic base image for docker with docker
I0701 22:52:54.364139 221539 out.go:177] * Pulling base image ...
I0701 22:52:54.365985 221539 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
I0701 22:52:54.366033 221539 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4
I0701 22:52:54.366048 221539 cache.go:57] Caching tarball of preloaded images
I0701 22:52:54.366096 221539 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon
I0701 22:52:54.366301 221539 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0701 22:52:54.366321 221539 cache.go:60] Finished verifying existence of preloaded tar for v1.24.2 on docker
I0701 22:52:54.366459 221539 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/force-systemd-env-20220701225253-10065/config.json ...
I0701 22:52:54.366493 221539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14483-3521-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/force-systemd-env-20220701225253-10065/config.json: {Name:mkdf921fdd033089ff1e72d0c7fe51517d2eebaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0701 22:52:54.424086 221539 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon, skipping pull
I0701 22:52:54.424115 221539 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e exists in daemon, skipping load
I0701 22:52:54.424135 221539 cache.go:208] Successfully downloaded all kic artifacts
I0701 22:52:54.424179 221539 start.go:352] acquiring machines lock for force-systemd-env-20220701225253-10065: {Name:mkd28e06039edc092fe82363c31554954f002d25 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0701 22:52:54.424336 221539 start.go:356] acquired machines lock for "force-systemd-env-20220701225253-10065" in 128.07µs
I0701 22:52:54.424371 221539 start.go:91] Provisioning new machine with config: &{Name:force-systemd-env-20220701225253-10065 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:force-systemd-env-20220701225253-10065 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:} &{Name: IP: Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}
I0701 22:52:54.424454 221539 start.go:131] createHost starting for "" (driver="docker")
*
* ==> Docker <==
* -- Logs begin at Fri 2022-07-01 22:51:03 UTC, end at Fri 2022-07-01 22:52:58 UTC. --
Jul 01 22:52:19 pause-20220701225037-10065 dockerd[4133]: time="2022-07-01T22:52:19.436807852Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 601cfcfd8dc963a50ae23399869791feac0e98819f1a13a2301a351d0489cfbb 513f8c5aa07a106e4472168e93ec97cf702e15f6ff9fe8631d6e192d22d8d0dd], retrying...."
Jul 01 22:52:19 pause-20220701225037-10065 dockerd[4133]: time="2022-07-01T22:52:19.620907136Z" level=info msg="Removing stale sandbox 30d2f41f10c2965a9f8311617b688f9f369d6bcd7a9a4dc0c5c0bc1fa85ffa76 (12657b5aa4cd0cf9943e5bf389c91bc43e58c3d8658b7507ac9f1592418144b6)"
Jul 01 22:52:19 pause-20220701225037-10065 dockerd[4133]: time="2022-07-01T22:52:19.622776511Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint b2bba9bb4e19c1ae40c3535ae411107621843192a02b6ed2325d84c4d326142d edc5062a01e1b18d3cbd9a523749df608c075bd6a75d0a4408044dec65a1b26f], retrying...."
Jul 01 22:52:19 pause-20220701225037-10065 dockerd[4133]: time="2022-07-01T22:52:19.800936200Z" level=info msg="Removing stale sandbox 74b348bd57b6165ab4e8c030924c6f0333b61af9afddeb1f2e120fe152838ccc (bcdc199fee91b0769e4432e861b58c794cba9ccd7abfca3700fc250500081fbf)"
Jul 01 22:52:19 pause-20220701225037-10065 dockerd[4133]: time="2022-07-01T22:52:19.802857009Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint b2bba9bb4e19c1ae40c3535ae411107621843192a02b6ed2325d84c4d326142d 9309beca9771ba42e738bba3079d7319ada816e2a39ceacf292dc690041e581f], retrying...."
Jul 01 22:52:19 pause-20220701225037-10065 dockerd[4133]: time="2022-07-01T22:52:19.853658512Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Jul 01 22:52:19 pause-20220701225037-10065 dockerd[4133]: time="2022-07-01T22:52:19.922143618Z" level=info msg="Loading containers: done."
Jul 01 22:52:19 pause-20220701225037-10065 dockerd[4133]: time="2022-07-01T22:52:19.951379980Z" level=info msg="Docker daemon" commit=a89b842 graphdriver(s)=overlay2 version=20.10.17
Jul 01 22:52:19 pause-20220701225037-10065 dockerd[4133]: time="2022-07-01T22:52:19.951507271Z" level=info msg="Daemon has completed initialization"
Jul 01 22:52:19 pause-20220701225037-10065 systemd[1]: Started Docker Application Container Engine.
Jul 01 22:52:19 pause-20220701225037-10065 dockerd[4133]: time="2022-07-01T22:52:19.985269185Z" level=info msg="API listen on [::]:2376"
Jul 01 22:52:19 pause-20220701225037-10065 dockerd[4133]: time="2022-07-01T22:52:19.989782419Z" level=info msg="API listen on /var/run/docker.sock"
Jul 01 22:52:20 pause-20220701225037-10065 dockerd[4133]: time="2022-07-01T22:52:20.203475255Z" level=error msg="Failed to compute size of container rootfs 93fd92f11bbf0488a7b7410f273560237f99f2ab49e5f6dd554334def47ced33: mount does not exist"
Jul 01 22:52:25 pause-20220701225037-10065 dockerd[4133]: time="2022-07-01T22:52:25.038995340Z" level=info msg="ignoring event" container=14a43d53786097a882d293a59018789aea5c352662b761659de462d52f9ec4c6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jul 01 22:52:25 pause-20220701225037-10065 dockerd[4133]: time="2022-07-01T22:52:25.040180861Z" level=info msg="ignoring event" container=6ae728ad062b9982acb1ce89649399fcb89ef3d9bbef8b2d130bab636b6d5175 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jul 01 22:52:25 pause-20220701225037-10065 dockerd[4133]: time="2022-07-01T22:52:25.082999327Z" level=info msg="ignoring event" container=58283dd133ae642b405ee3c7aeaa7c6db15709b0fb84eb80b0bf56b1260ddbfe module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jul 01 22:52:25 pause-20220701225037-10065 dockerd[4133]: time="2022-07-01T22:52:25.083030975Z" level=info msg="ignoring event" container=1489f2e1da5fa5d9faed742a3153289625b23203233bb4f1b1d668862dbef857 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jul 01 22:52:25 pause-20220701225037-10065 dockerd[4133]: time="2022-07-01T22:52:25.084007935Z" level=info msg="ignoring event" container=fcb211cabc39a2f6cb16c9f7023cbc86dd153a3d8e72e4251ee5ded8f6f7e88b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jul 01 22:52:25 pause-20220701225037-10065 dockerd[4133]: time="2022-07-01T22:52:25.101595019Z" level=info msg="ignoring event" container=d21e2232a42b9cccd8b0dc9bccb3c1a4ec85d8b65e22d14d3ffe1a63861e7933 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jul 01 22:52:25 pause-20220701225037-10065 dockerd[4133]: time="2022-07-01T22:52:25.947364822Z" level=error msg="27d94ed4dff426c6a071de822f0be84b04cd233d1d857c244036a485ab459ed3 cleanup: failed to delete container from containerd: no such container"
Jul 01 22:52:25 pause-20220701225037-10065 dockerd[4133]: time="2022-07-01T22:52:25.948395327Z" level=error msg="3bdb4eff83b2a1bba0917dc8286d21eefbf084fe40433829315797631d0abc61 cleanup: failed to delete container from containerd: no such container"
Jul 01 22:52:25 pause-20220701225037-10065 dockerd[4133]: time="2022-07-01T22:52:25.948555303Z" level=error msg="df4b3f4ae16edee7a124d491027b694991bfcee9919abafc85fc4a5ebbe07d46 cleanup: failed to delete container from containerd: no such container"
Jul 01 22:52:25 pause-20220701225037-10065 dockerd[4133]: time="2022-07-01T22:52:25.950094192Z" level=error msg="a18b374894e46673d1a54815b05cff205760f02f7e2d3e92e402daf36b9aa0cb cleanup: failed to delete container from containerd: no such container"
Jul 01 22:52:25 pause-20220701225037-10065 dockerd[4133]: time="2022-07-01T22:52:25.950207135Z" level=error msg="b3907295cfb5852b50dc450d0790f8cbbb47f7ea564986012bc2b042314de154 cleanup: failed to delete container from containerd: no such container"
Jul 01 22:52:26 pause-20220701225037-10065 dockerd[4133]: time="2022-07-01T22:52:26.152236859Z" level=error msg="7706b293b404cdc30fe1b6f1ae71369c4e88288ffac286c0e7ecf65d46849390 cleanup: failed to delete container from containerd: no such container"
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
bc8eb3b85df95 6e38f40d628db 8 seconds ago Running storage-provisioner 0 d316c5bbd2020
7b4f3ab61f2ac a4ca41631cc7a 21 seconds ago Running coredns 3 3b9491065a997
8c94216472f33 a634548d10b03 22 seconds ago Running kube-proxy 3 196d6896c1e45
7c466a26df783 aebe758cef4cd 30 seconds ago Running etcd 3 d9afa4105dbb8
6ef1f459e2361 d3377ffb7177c 30 seconds ago Running kube-apiserver 2 33f21d943ba1c
49c9854851e7e 34cdf99b1bb3b 30 seconds ago Running kube-controller-manager 3 d836c4bd13bbb
0bd8d7e9873d2 5d725196c1f47 30 seconds ago Running kube-scheduler 2 e1bf6726a1549
7706b293b404c a4ca41631cc7a 37 seconds ago Created coredns 2 fcb211cabc39a
b3907295cfb58 aebe758cef4cd 37 seconds ago Created etcd 2 d21e2232a42b9
27d94ed4dff42 5d725196c1f47 37 seconds ago Created kube-scheduler 1 6ae728ad062b9
3bdb4eff83b2a d3377ffb7177c 37 seconds ago Created kube-apiserver 1 14a43d5378609
df4b3f4ae16ed 34cdf99b1bb3b 37 seconds ago Created kube-controller-manager 2 58283dd133ae6
a18b374894e46 a634548d10b03 38 seconds ago Created kube-proxy 2 1489f2e1da5fa
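(The table above is the heart of the failure: every control-plane container has a fresh Running entry only seconds old plus an older entry left in Created state with a lower ATTEMPT, so the second start rebuilt the control plane instead of leaving it untouched, which is why "The running cluster does not require reconfiguration" never appeared. A sketch of watching one component's containers from the host; the io.kubernetes.container.name label is the one dockershim/cri-dockerd attach to kubelet-managed containers, treated here as an assumption:)

package main

import (
        "fmt"
        "os/exec"
)

func main() {
        // List every docker container the kubelet created for kube-apiserver,
        // including created/exited ones, to spot rebuilds across a minikube start.
        out, err := exec.Command("docker", "ps", "-a",
                "--filter", "label=io.kubernetes.container.name=kube-apiserver",
                "--format", "{{.ID}} {{.CreatedAt}} {{.Status}}").Output()
        if err != nil {
                panic(err)
        }
        fmt.Print(string(out))
}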
*
* ==> coredns [7706b293b404] <==
*
*
* ==> coredns [7b4f3ab61f2a] <==
* .:53
[INFO] plugin/reload: Running configuration MD5 = 821b10ea3c4cce3a8581cf6a394d92f0
CoreDNS-1.8.6
linux/amd64, go1.17.1, 13a9191
*
* ==> describe nodes <==
* Name: pause-20220701225037-10065
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=pause-20220701225037-10065
kubernetes.io/os=linux
minikube.k8s.io/commit=a9d0dc9dee163ffb569dd54a2ee17668627fbc04
minikube.k8s.io/name=pause-20220701225037-10065
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2022_07_01T22_51_29_0700
minikube.k8s.io/version=v1.26.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Fri, 01 Jul 2022 22:51:26 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: pause-20220701225037-10065
AcquireTime: <unset>
RenewTime: Fri, 01 Jul 2022 22:52:54 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Fri, 01 Jul 2022 22:52:33 +0000 Fri, 01 Jul 2022 22:51:23 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Fri, 01 Jul 2022 22:52:33 +0000 Fri, 01 Jul 2022 22:51:23 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Fri, 01 Jul 2022 22:52:33 +0000 Fri, 01 Jul 2022 22:51:23 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Fri, 01 Jul 2022 22:52:33 +0000 Fri, 01 Jul 2022 22:51:39 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.67.2
Hostname: pause-20220701225037-10065
Capacity:
cpu: 8
ephemeral-storage: 304695084Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32873484Ki
pods: 110
Allocatable:
cpu: 8
ephemeral-storage: 304695084Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32873484Ki
pods: 110
System Info:
Machine ID: bbe1e1cef6e940328962dca52b3c5731
System UUID: 6a2b12fa-900c-459e-8bef-f54d21d18140
Boot ID: b4d8e8fa-97b6-4834-a1fb-bac1d7a9adea
Kernel Version: 5.15.0-1012-gcp
OS Image: Ubuntu 20.04.4 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.17
Kubelet Version: v1.24.2
Kube-Proxy Version: v1.24.2
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (7 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system                 coredns-6d4b75cb6d-9hr6m                              100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     76s
kube-system                 etcd-pause-20220701225037-10065                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         89s
kube-system                 kube-apiserver-pause-20220701225037-10065             250m (3%)     0 (0%)      0 (0%)           0 (0%)         89s
kube-system                 kube-controller-manager-pause-20220701225037-10065    200m (2%)     0 (0%)      0 (0%)           0 (0%)         89s
kube-system                 kube-proxy-2rj2j                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         77s
kube-system                 kube-scheduler-pause-20220701225037-10065             100m (1%)     0 (0%)      0 (0%)           0 (0%)         89s
kube-system                 storage-provisioner                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu                750m (9%)    0 (0%)
memory             170Mi (0%)   170Mi (0%)
ephemeral-storage  0 (0%)       0 (0%)
hugepages-1Gi      0 (0%)       0 (0%)
hugepages-2Mi      0 (0%)       0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 21s kube-proxy
Normal Starting 75s kube-proxy
Normal NodeHasSufficientMemory 101s (x5 over 101s) kubelet Node pause-20220701225037-10065 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 101s (x4 over 101s) kubelet Node pause-20220701225037-10065 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 101s (x4 over 101s) kubelet Node pause-20220701225037-10065 status is now: NodeHasSufficientPID
Normal Starting 89s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 89s kubelet Node pause-20220701225037-10065 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 89s kubelet Node pause-20220701225037-10065 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 89s kubelet Node pause-20220701225037-10065 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 89s kubelet Updated Node Allocatable limit across pods
Normal NodeReady 79s kubelet Node pause-20220701225037-10065 status is now: NodeReady
Normal RegisteredNode 77s node-controller Node pause-20220701225037-10065 event: Registered Node pause-20220701225037-10065 in Controller
Normal Starting 31s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 30s (x8 over 31s) kubelet Node pause-20220701225037-10065 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 30s (x8 over 31s) kubelet Node pause-20220701225037-10065 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 30s (x7 over 31s) kubelet Node pause-20220701225037-10065 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 30s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 12s node-controller Node pause-20220701225037-10065 event: Registered Node pause-20220701225037-10065 in Controller
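(The events confirm the rebuild: two "Starting kubelet." cycles, 89s and 31s ago, bracket the second minikube start. The same event stream can be pulled straight from the API server rather than via minikube logs; a minimal sketch, filtering events to the node object named in the run above:)

package main

import (
        "fmt"
        "os/exec"
)

func main() {
        // Fetch the node's events directly; --field-selector narrows the query
        // to events whose involved object is the node itself.
        out, err := exec.Command("kubectl",
                "--context", "pause-20220701225037-10065",
                "get", "events", "-A",
                "--field-selector", "involvedObject.name=pause-20220701225037-10065",
                "--sort-by", ".lastTimestamp").Output()
        if err != nil {
                panic(err)
        }
        fmt.Print(string(out))
}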
*
* ==> dmesg <==
* [ +0.007937] FS-Cache: N-cookie d=000000009fe4b6b3{9p.inode} n=00000000243e4d19
[ +0.008737] FS-Cache: N-key=[8] '8da00f0200000000'
[ +0.008910] FS-Cache: Duplicate cookie detected
[ +0.004855] FS-Cache: O-cookie c=00000005 [p=00000003 fl=226 nc=0 na=1]
[ +0.008102] FS-Cache: O-cookie d=000000009fe4b6b3{9p.inode} n=00000000ca7b3bc3
[ +0.008715] FS-Cache: O-key=[8] '8da00f0200000000'
[ +0.006303] FS-Cache: N-cookie c=0000000d [p=00000003 fl=2 nc=0 na=1]
[ +0.007941] FS-Cache: N-cookie d=000000009fe4b6b3{9p.inode} n=0000000026c10e99
[ +0.008753] FS-Cache: N-key=[8] '8da00f0200000000'
[ +3.240272] FS-Cache: Duplicate cookie detected
[ +0.004683] FS-Cache: O-cookie c=00000006 [p=00000003 fl=226 nc=0 na=1]
[ +0.006838] FS-Cache: O-cookie d=000000009fe4b6b3{9p.inode} n=00000000a79cd091
[ +0.007363] FS-Cache: O-key=[8] '8ca00f0200000000'
[ +0.004958] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
[ +0.007950] FS-Cache: N-cookie d=000000009fe4b6b3{9p.inode} n=00000000243e4d19
[ +0.008729] FS-Cache: N-key=[8] '8ca00f0200000000'
[ +0.400056] FS-Cache: Duplicate cookie detected
[ +0.004681] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
[ +0.006754] FS-Cache: O-cookie d=000000009fe4b6b3{9p.inode} n=00000000697de9f9
[ +0.007371] FS-Cache: O-key=[8] '96a00f0200000000'
[ +0.004962] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
[ +0.007943] FS-Cache: N-cookie d=000000009fe4b6b3{9p.inode} n=00000000874d29a6
[ +0.008739] FS-Cache: N-key=[8] '96a00f0200000000'
[Jul 1 22:32] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
[Jul 1 22:51] process 'docker/tmp/qemu-check884735338/check' started with executable stack
*
* ==> etcd [7c466a26df78] <==
* {"level":"warn","ts":"2022-07-01T22:52:35.726Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"121.562014ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"warn","ts":"2022-07-01T22:52:35.726Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"204.925365ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/default/pause-20220701225037-10065.16fdd6608161a7c1\" ","response":"range_response_count:1 size:695"}
{"level":"info","ts":"2022-07-01T22:52:35.727Z","caller":"traceutil/trace.go:171","msg":"trace[807054841] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:418; }","duration":"121.64982ms","start":"2022-07-01T22:52:35.605Z","end":"2022-07-01T22:52:35.727Z","steps":["trace[807054841] 'agreement among raft nodes before linearized reading' (duration: 32.316474ms)","trace[807054841] 'range keys from in-memory index tree' (duration: 89.227071ms)"],"step_count":2}
{"level":"info","ts":"2022-07-01T22:52:35.727Z","caller":"traceutil/trace.go:171","msg":"trace[249898954] range","detail":"{range_begin:/registry/events/default/pause-20220701225037-10065.16fdd6608161a7c1; range_end:; response_count:1; response_revision:418; }","duration":"204.99609ms","start":"2022-07-01T22:52:35.522Z","end":"2022-07-01T22:52:35.727Z","steps":["trace[249898954] 'agreement among raft nodes before linearized reading' (duration: 115.629619ms)","trace[249898954] 'range keys from in-memory index tree' (duration: 89.247471ms)"],"step_count":2}
{"level":"warn","ts":"2022-07-01T22:52:35.726Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"112.656864ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-6d4b75cb6d-9hr6m\" ","response":"range_response_count:1 size:4455"}
{"level":"info","ts":"2022-07-01T22:52:35.727Z","caller":"traceutil/trace.go:171","msg":"trace[557320255] range","detail":"{range_begin:/registry/pods/kube-system/coredns-6d4b75cb6d-9hr6m; range_end:; response_count:1; response_revision:418; }","duration":"112.797131ms","start":"2022-07-01T22:52:35.614Z","end":"2022-07-01T22:52:35.727Z","steps":["trace[557320255] 'agreement among raft nodes before linearized reading' (duration: 23.361675ms)","trace[557320255] 'range keys from in-memory index tree' (duration: 89.267427ms)"],"step_count":2}
{"level":"warn","ts":"2022-07-01T22:52:35.726Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"254.315074ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:coredns\" ","response":"range_response_count:1 size:406"}
{"level":"info","ts":"2022-07-01T22:52:35.727Z","caller":"traceutil/trace.go:171","msg":"trace[1041215956] range","detail":"{range_begin:/registry/clusterroles/system:coredns; range_end:; response_count:1; response_revision:418; }","duration":"254.609543ms","start":"2022-07-01T22:52:35.472Z","end":"2022-07-01T22:52:35.727Z","steps":["trace[1041215956] 'agreement among raft nodes before linearized reading' (duration: 164.969271ms)","trace[1041215956] 'range keys from in-memory index tree' (duration: 89.3116ms)"],"step_count":2}
{"level":"info","ts":"2022-07-01T22:52:35.864Z","caller":"traceutil/trace.go:171","msg":"trace[1812694428] linearizableReadLoop","detail":"{readStateIndex:444; appliedIndex:443; }","duration":"131.231877ms","start":"2022-07-01T22:52:35.733Z","end":"2022-07-01T22:52:35.864Z","steps":["trace[1812694428] 'read index received' (duration: 96.020022ms)","trace[1812694428] 'applied index is now lower than readState.Index' (duration: 35.21102ms)"],"step_count":2}
{"level":"info","ts":"2022-07-01T22:52:35.864Z","caller":"traceutil/trace.go:171","msg":"trace[231457023] transaction","detail":"{read_only:false; response_revision:420; number_of_response:1; }","duration":"131.68496ms","start":"2022-07-01T22:52:35.733Z","end":"2022-07-01T22:52:35.864Z","steps":["trace[231457023] 'process raft request' (duration: 96.464594ms)","trace[231457023] 'compare' (duration: 34.970597ms)"],"step_count":2}
{"level":"warn","ts":"2022-07-01T22:52:35.865Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"131.37818ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/system:coredns\" ","response":"range_response_count:1 size:417"}
{"level":"info","ts":"2022-07-01T22:52:35.865Z","caller":"traceutil/trace.go:171","msg":"trace[2129085806] range","detail":"{range_begin:/registry/clusterrolebindings/system:coredns; range_end:; response_count:1; response_revision:421; }","duration":"131.788586ms","start":"2022-07-01T22:52:35.733Z","end":"2022-07-01T22:52:35.865Z","steps":["trace[2129085806] 'agreement among raft nodes before linearized reading' (duration: 131.338818ms)"],"step_count":1}
{"level":"info","ts":"2022-07-01T22:52:36.108Z","caller":"traceutil/trace.go:171","msg":"trace[1646082560] linearizableReadLoop","detail":"{readStateIndex:447; appliedIndex:447; }","duration":"191.564873ms","start":"2022-07-01T22:52:35.916Z","end":"2022-07-01T22:52:36.108Z","steps":["trace[1646082560] 'read index received' (duration: 191.55564ms)","trace[1646082560] 'applied index is now lower than readState.Index' (duration: 7.985µs)"],"step_count":2}
{"level":"warn","ts":"2022-07-01T22:52:36.305Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"290.939878ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-2rj2j\" ","response":"range_response_count:1 size:4440"}
{"level":"warn","ts":"2022-07-01T22:52:36.305Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"383.57946ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/default/pause-20220701225037-10065.16fdd66081618d45\" ","response":"range_response_count:1 size:697"}
{"level":"info","ts":"2022-07-01T22:52:36.305Z","caller":"traceutil/trace.go:171","msg":"trace[280191980] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-2rj2j; range_end:; response_count:1; response_revision:421; }","duration":"291.047415ms","start":"2022-07-01T22:52:36.014Z","end":"2022-07-01T22:52:36.305Z","steps":["trace[280191980] 'agreement among raft nodes before linearized reading' (duration: 93.985949ms)","trace[280191980] 'range keys from in-memory index tree' (duration: 196.915818ms)"],"step_count":2}
{"level":"info","ts":"2022-07-01T22:52:36.305Z","caller":"traceutil/trace.go:171","msg":"trace[1978112232] range","detail":"{range_begin:/registry/events/default/pause-20220701225037-10065.16fdd66081618d45; range_end:; response_count:1; response_revision:421; }","duration":"383.62573ms","start":"2022-07-01T22:52:35.921Z","end":"2022-07-01T22:52:36.305Z","steps":["trace[1978112232] 'agreement among raft nodes before linearized reading' (duration: 186.546812ms)","trace[1978112232] 'range keys from in-memory index tree' (duration: 196.933245ms)"],"step_count":2}
{"level":"warn","ts":"2022-07-01T22:52:36.305Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-07-01T22:52:35.921Z","time spent":"383.710242ms","remote":"127.0.0.1:44434","response type":"/etcdserverpb.KV/Range","request count":0,"request size":70,"response count":1,"response size":721,"request content":"key:\"/registry/events/default/pause-20220701225037-10065.16fdd66081618d45\" "}
{"level":"warn","ts":"2022-07-01T22:52:36.305Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"388.64521ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" ","response":"range_response_count:1 size:4014"}
{"level":"info","ts":"2022-07-01T22:52:36.305Z","caller":"traceutil/trace.go:171","msg":"trace[507203427] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:421; }","duration":"388.991407ms","start":"2022-07-01T22:52:35.916Z","end":"2022-07-01T22:52:36.305Z","steps":["trace[507203427] 'agreement among raft nodes before linearized reading' (duration: 191.664767ms)","trace[507203427] 'range keys from in-memory index tree' (duration: 196.93931ms)"],"step_count":2}
{"level":"warn","ts":"2022-07-01T22:52:36.305Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-07-01T22:52:35.916Z","time spent":"389.070328ms","remote":"127.0.0.1:44544","response type":"/etcdserverpb.KV/Range","request count":0,"request size":43,"response count":1,"response size":4038,"request content":"key:\"/registry/deployments/kube-system/coredns\" "}
{"level":"info","ts":"2022-07-01T22:52:36.561Z","caller":"traceutil/trace.go:171","msg":"trace[211957152] linearizableReadLoop","detail":"{readStateIndex:451; appliedIndex:450; }","duration":"151.343056ms","start":"2022-07-01T22:52:36.410Z","end":"2022-07-01T22:52:36.561Z","steps":["trace[211957152] 'read index received' (duration: 97.0169ms)","trace[211957152] 'applied index is now lower than readState.Index' (duration: 54.325616ms)"],"step_count":2}
{"level":"info","ts":"2022-07-01T22:52:36.561Z","caller":"traceutil/trace.go:171","msg":"trace[1019660369] transaction","detail":"{read_only:false; response_revision:425; number_of_response:1; }","duration":"154.702012ms","start":"2022-07-01T22:52:36.407Z","end":"2022-07-01T22:52:36.561Z","steps":["trace[1019660369] 'process raft request' (duration: 100.409573ms)","trace[1019660369] 'compare' (duration: 54.146656ms)"],"step_count":2}
{"level":"warn","ts":"2022-07-01T22:52:36.561Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"151.593712ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ranges/serviceips\" ","response":"range_response_count:1 size:118"}
{"level":"info","ts":"2022-07-01T22:52:36.561Z","caller":"traceutil/trace.go:171","msg":"trace[1826853412] range","detail":"{range_begin:/registry/ranges/serviceips; range_end:; response_count:1; response_revision:425; }","duration":"151.664518ms","start":"2022-07-01T22:52:36.410Z","end":"2022-07-01T22:52:36.561Z","steps":["trace[1826853412] 'agreement among raft nodes before linearized reading' (duration: 151.459022ms)"],"step_count":1}
*
* ==> etcd [b3907295cfb5] <==
*
*
* ==> kernel <==
* 22:52:59 up 35 min, 0 users, load average: 7.98, 3.73, 2.01
Linux pause-20220701225037-10065 5.15.0-1012-gcp #17~20.04.1-Ubuntu SMP Thu Jun 23 16:10:34 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.4 LTS"
*
* ==> kube-apiserver [3bdb4eff83b2] <==
*
*
* ==> kube-apiserver [6ef1f459e236] <==
* I0701 22:52:33.583767 1 apiservice_controller.go:97] Starting APIServiceRegistrationController
I0701 22:52:33.583794 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0701 22:52:33.584066 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0701 22:52:33.584074 1 shared_informer.go:255] Waiting for caches to sync for cluster_authentication_trust_controller
I0701 22:52:33.584116 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I0701 22:52:33.594827 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
E0701 22:52:33.701380 1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
I0701 22:52:33.714610 1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
I0701 22:52:33.782263 1 shared_informer.go:262] Caches are synced for crd-autoregister
I0701 22:52:33.782327 1 cache.go:39] Caches are synced for autoregister controller
I0701 22:52:33.782347 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0701 22:52:33.782807 1 shared_informer.go:262] Caches are synced for node_authorizer
I0701 22:52:33.782837 1 apf_controller.go:322] Running API Priority and Fairness config worker
I0701 22:52:33.783831 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0701 22:52:33.784864 1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
I0701 22:52:34.254569 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0701 22:52:34.570215 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0701 22:52:35.871223 1 controller.go:611] quota admission added evaluator for: serviceaccounts
I0701 22:52:35.915018 1 controller.go:611] quota admission added evaluator for: deployments.apps
I0701 22:52:36.584990 1 controller.go:611] quota admission added evaluator for: daemonsets.apps
I0701 22:52:36.603067 1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0701 22:52:36.608721 1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0701 22:52:37.038591 1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
I0701 22:52:47.029542 1 controller.go:611] quota admission added evaluator for: endpoints
I0701 22:52:47.129474 1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
*
* ==> kube-controller-manager [49c9854851e7] <==
* I0701 22:52:46.936624 1 shared_informer.go:262] Caches are synced for taint
I0701 22:52:46.936708 1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone:
I0701 22:52:46.936725 1 taint_manager.go:187] "Starting NoExecuteTaintManager"
W0701 22:52:46.936785 1 node_lifecycle_controller.go:1014] Missing timestamp for Node pause-20220701225037-10065. Assuming now as a timestamp.
I0701 22:52:46.936820 1 node_lifecycle_controller.go:1215] Controller detected that zone is now in state Normal.
I0701 22:52:46.936888 1 event.go:294] "Event occurred" object="pause-20220701225037-10065" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-20220701225037-10065 event: Registered Node pause-20220701225037-10065 in Controller"
I0701 22:52:46.941619 1 shared_informer.go:262] Caches are synced for ReplicationController
I0701 22:52:46.947874 1 shared_informer.go:262] Caches are synced for persistent volume
I0701 22:52:46.950186 1 shared_informer.go:262] Caches are synced for disruption
I0701 22:52:46.950207 1 disruption.go:371] Sending events to api server.
I0701 22:52:46.991055 1 shared_informer.go:262] Caches are synced for node
I0701 22:52:46.991114 1 range_allocator.go:173] Starting range CIDR allocator
I0701 22:52:46.991121 1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
I0701 22:52:46.991132 1 shared_informer.go:262] Caches are synced for cidrallocator
I0701 22:52:46.998428 1 shared_informer.go:262] Caches are synced for GC
I0701 22:52:47.004581 1 shared_informer.go:262] Caches are synced for TTL
I0701 22:52:47.010824 1 shared_informer.go:262] Caches are synced for endpoint_slice
I0701 22:52:47.024171 1 shared_informer.go:262] Caches are synced for daemon sets
I0701 22:52:47.029841 1 shared_informer.go:262] Caches are synced for resource quota
I0701 22:52:47.029942 1 shared_informer.go:262] Caches are synced for resource quota
I0701 22:52:47.040094 1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
I0701 22:52:47.137501 1 shared_informer.go:262] Caches are synced for attach detach
I0701 22:52:47.545921 1 shared_informer.go:262] Caches are synced for garbage collector
I0701 22:52:47.545954 1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0701 22:52:47.561098 1 shared_informer.go:262] Caches are synced for garbage collector
*
* ==> kube-controller-manager [df4b3f4ae16e] <==
*
*
* ==> kube-proxy [8c94216472f3] <==
* I0701 22:52:36.998570 1 node.go:163] Successfully retrieved node IP: 192.168.67.2
I0701 22:52:36.998651 1 server_others.go:138] "Detected node IP" address="192.168.67.2"
I0701 22:52:36.998677 1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I0701 22:52:37.034527 1 server_others.go:206] "Using iptables Proxier"
I0701 22:52:37.034574 1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
I0701 22:52:37.034587 1 server_others.go:214] "Creating dualStackProxier for iptables"
I0701 22:52:37.034613 1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
I0701 22:52:37.034642 1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I0701 22:52:37.034781 1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I0701 22:52:37.035020 1 server.go:661] "Version info" version="v1.24.2"
I0701 22:52:37.035041 1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0701 22:52:37.035935 1 config.go:226] "Starting endpoint slice config controller"
I0701 22:52:37.035966 1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
I0701 22:52:37.035965 1 config.go:317] "Starting service config controller"
I0701 22:52:37.035977 1 shared_informer.go:255] Waiting for caches to sync for service config
I0701 22:52:37.036023 1 config.go:444] "Starting node config controller"
I0701 22:52:37.036029 1 shared_informer.go:255] Waiting for caches to sync for node config
I0701 22:52:37.136042 1 shared_informer.go:262] Caches are synced for service config
I0701 22:52:37.136071 1 shared_informer.go:262] Caches are synced for node config
I0701 22:52:37.136085 1 shared_informer.go:262] Caches are synced for endpoint slice config
*
* ==> kube-proxy [a18b374894e4] <==
*
*
* ==> kube-scheduler [0bd8d7e9873d] <==
* I0701 22:52:30.289677 1 serving.go:348] Generated self-signed cert in-memory
W0701 22:52:33.604603 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0701 22:52:33.604642 1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0701 22:52:33.604654 1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
W0701 22:52:33.604663 1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0701 22:52:33.704410 1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.2"
I0701 22:52:33.704444 1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0701 22:52:33.706058 1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
I0701 22:52:33.706295 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0701 22:52:33.706319 1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0701 22:52:33.706351 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0701 22:52:33.806824 1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kube-scheduler [27d94ed4dff4] <==
*
*
* ==> kubelet <==
* -- Logs begin at Fri 2022-07-01 22:51:03 UTC, end at Fri 2022-07-01 22:53:00 UTC. --
Jul 01 22:52:33 pause-20220701225037-10065 kubelet[5509]: E0701 22:52:33.355649 5509 kubelet.go:2424] "Error getting node" err="node \"pause-20220701225037-10065\" not found"
Jul 01 22:52:33 pause-20220701225037-10065 kubelet[5509]: E0701 22:52:33.456256 5509 kubelet.go:2424] "Error getting node" err="node \"pause-20220701225037-10065\" not found"
Jul 01 22:52:33 pause-20220701225037-10065 kubelet[5509]: E0701 22:52:33.556881 5509 kubelet.go:2424] "Error getting node" err="node \"pause-20220701225037-10065\" not found"
Jul 01 22:52:33 pause-20220701225037-10065 kubelet[5509]: I0701 22:52:33.657874 5509 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
Jul 01 22:52:33 pause-20220701225037-10065 kubelet[5509]: I0701 22:52:33.685617 5509 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
Jul 01 22:52:33 pause-20220701225037-10065 kubelet[5509]: I0701 22:52:33.726373 5509 kubelet_node_status.go:108] "Node was previously registered" node="pause-20220701225037-10065"
Jul 01 22:52:33 pause-20220701225037-10065 kubelet[5509]: I0701 22:52:33.726476 5509 kubelet_node_status.go:73] "Successfully registered node" node="pause-20220701225037-10065"
Jul 01 22:52:33 pause-20220701225037-10065 kubelet[5509]: I0701 22:52:33.799697 5509 apiserver.go:52] "Watching apiserver"
Jul 01 22:52:33 pause-20220701225037-10065 kubelet[5509]: I0701 22:52:33.806286 5509 topology_manager.go:200] "Topology Admit Handler"
Jul 01 22:52:33 pause-20220701225037-10065 kubelet[5509]: I0701 22:52:33.808794 5509 topology_manager.go:200] "Topology Admit Handler"
Jul 01 22:52:33 pause-20220701225037-10065 kubelet[5509]: I0701 22:52:33.905427 5509 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4427a6a7-009f-4357-8c8a-fedbba15c52e-lib-modules\") pod \"kube-proxy-2rj2j\" (UID: \"4427a6a7-009f-4357-8c8a-fedbba15c52e\") " pod="kube-system/kube-proxy-2rj2j"
Jul 01 22:52:33 pause-20220701225037-10065 kubelet[5509]: I0701 22:52:33.905474 5509 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qpdl\" (UniqueName: \"kubernetes.io/projected/4427a6a7-009f-4357-8c8a-fedbba15c52e-kube-api-access-4qpdl\") pod \"kube-proxy-2rj2j\" (UID: \"4427a6a7-009f-4357-8c8a-fedbba15c52e\") " pod="kube-system/kube-proxy-2rj2j"
Jul 01 22:52:33 pause-20220701225037-10065 kubelet[5509]: I0701 22:52:33.905500 5509 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4427a6a7-009f-4357-8c8a-fedbba15c52e-xtables-lock\") pod \"kube-proxy-2rj2j\" (UID: \"4427a6a7-009f-4357-8c8a-fedbba15c52e\") " pod="kube-system/kube-proxy-2rj2j"
Jul 01 22:52:33 pause-20220701225037-10065 kubelet[5509]: I0701 22:52:33.905538 5509 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4427a6a7-009f-4357-8c8a-fedbba15c52e-kube-proxy\") pod \"kube-proxy-2rj2j\" (UID: \"4427a6a7-009f-4357-8c8a-fedbba15c52e\") " pod="kube-system/kube-proxy-2rj2j"
Jul 01 22:52:34 pause-20220701225037-10065 kubelet[5509]: I0701 22:52:34.006270 5509 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/213c07e1-cfd7-4fa5-88d2-f2f672e1b4d4-config-volume\") pod \"coredns-6d4b75cb6d-9hr6m\" (UID: \"213c07e1-cfd7-4fa5-88d2-f2f672e1b4d4\") " pod="kube-system/coredns-6d4b75cb6d-9hr6m"
Jul 01 22:52:34 pause-20220701225037-10065 kubelet[5509]: I0701 22:52:34.006552 5509 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgjh4\" (UniqueName: \"kubernetes.io/projected/213c07e1-cfd7-4fa5-88d2-f2f672e1b4d4-kube-api-access-hgjh4\") pod \"coredns-6d4b75cb6d-9hr6m\" (UID: \"213c07e1-cfd7-4fa5-88d2-f2f672e1b4d4\") " pod="kube-system/coredns-6d4b75cb6d-9hr6m"
Jul 01 22:52:34 pause-20220701225037-10065 kubelet[5509]: I0701 22:52:34.006598 5509 reconciler.go:157] "Reconciler: start to sync state"
Jul 01 22:52:36 pause-20220701225037-10065 kubelet[5509]: I0701 22:52:36.761091 5509 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="196d6896c1e45fc0dcd7b5ebc721dd381181b03be239a4bbd01b09c22e258cc1"
Jul 01 22:52:36 pause-20220701225037-10065 kubelet[5509]: I0701 22:52:36.982885 5509 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="3b9491065a997758567c4ce62b106caef6a5ad7c61704da54b5ec635e24000a5"
Jul 01 22:52:39 pause-20220701225037-10065 kubelet[5509]: I0701 22:52:39.032005 5509 prober_manager.go:274] "Failed to trigger a manual run" probe="Readiness"
Jul 01 22:52:40 pause-20220701225037-10065 kubelet[5509]: I0701 22:52:40.037454 5509 prober_manager.go:274] "Failed to trigger a manual run" probe="Readiness"
Jul 01 22:52:42 pause-20220701225037-10065 kubelet[5509]: I0701 22:52:42.266775 5509 prober_manager.go:274] "Failed to trigger a manual run" probe="Readiness"
Jul 01 22:52:50 pause-20220701225037-10065 kubelet[5509]: I0701 22:52:50.446424 5509 topology_manager.go:200] "Topology Admit Handler"
Jul 01 22:52:50 pause-20220701225037-10065 kubelet[5509]: I0701 22:52:50.631797 5509 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/54985022-a6cd-4c59-af65-805d97e94819-tmp\") pod \"storage-provisioner\" (UID: \"54985022-a6cd-4c59-af65-805d97e94819\") " pod="kube-system/storage-provisioner"
Jul 01 22:52:50 pause-20220701225037-10065 kubelet[5509]: I0701 22:52:50.631860 5509 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jp9lq\" (UniqueName: \"kubernetes.io/projected/54985022-a6cd-4c59-af65-805d97e94819-kube-api-access-jp9lq\") pod \"storage-provisioner\" (UID: \"54985022-a6cd-4c59-af65-805d97e94819\") " pod="kube-system/storage-provisioner"
*
* ==> storage-provisioner [bc8eb3b85df9] <==
* I0701 22:52:51.067643 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0701 22:52:51.076653 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0701 22:52:51.076696 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0701 22:52:51.091071 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0701 22:52:51.091149 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8e9623bf-a8d3-4559-bdf3-e0cb6f256a1f", APIVersion:"v1", ResourceVersion:"484", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-20220701225037-10065_564a2dc6-f764-49dc-a377-99355d55ef35 became leader
I0701 22:52:51.091195 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_pause-20220701225037-10065_564a2dc6-f764-49dc-a377-99355d55ef35!
I0701 22:52:51.191919 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_pause-20220701225037-10065_564a2dc6-f764-49dc-a377-99355d55ef35!
-- /stdout --
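
The etcd traces in the dump above show linearized reads stalling on raft agreement and in-memory index scans, with several "apply request took too long" warnings well past the 100ms expected-duration — consistent with the load average of 7.98 reported in the kernel section. A minimal Go sketch for pulling those slow-apply entries out of the JSON log lines, assuming only the fields visible above (level, ts, msg, took), could look like:

// slowapply.go: hypothetical filter for the etcd JSON log lines shown
// above. Reads log lines on stdin and prints one line per
// "apply request took too long" warning with its timestamp and duration.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	type entry struct {
		Level string `json:"level"`
		TS    string `json:"ts"`
		Msg   string `json:"msg"`
		Took  string `json:"took"`
	}
	sc := bufio.NewScanner(os.Stdin)
	// Some trace lines are long; raise the scanner's line-length limit.
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024)
	for sc.Scan() {
		var e entry
		if json.Unmarshal(sc.Bytes(), &e) != nil {
			continue // skip non-JSON lines such as the "==> ... <==" headers
		}
		if e.Msg == "apply request took too long" {
			fmt.Printf("%s slow apply: took %s\n", e.TS, e.Took)
		}
	}
}

Fed the etcd section on stdin, this would emit the two 383ms/388ms slow applies visible above.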
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-20220701225037-10065 -n pause-20220701225037-10065
helpers_test.go:261: (dbg) Run: kubectl --context pause-20220701225037-10065 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods:
helpers_test.go:272: ======> post-mortem[TestPause/serial/SecondStartNoReconfiguration]: describe non-running pods <======
helpers_test.go:275: (dbg) Run: kubectl --context pause-20220701225037-10065 describe pod
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context pause-20220701225037-10065 describe pod : exit status 1 (51.911323ms)
** stderr **
error: resource name may not be empty
** /stderr **
helpers_test.go:277: kubectl --context pause-20220701225037-10065 describe pod : exit status 1
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (66.81s)
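
The non-zero exit from `kubectl describe pod` in the post-mortem above is expected here: the field selector at helpers_test.go:261 matched no non-running pods, so describe was invoked with an empty resource list and kubectl refused it with "resource name may not be empty". A guard of the following shape — a hypothetical sketch, not the actual helpers_test.go code — would skip the describe step when there is nothing to describe:

// postmortem.go: illustrative guard mirroring the post-mortem steps
// above. The function name and profile value are assumptions for the
// sketch, not minikube's real helpers.
package main

import (
	"context"
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// describeNonRunningPods lists pods whose phase is not Running, then
// describes them only when the list is non-empty.
func describeNonRunningPods(ctx context.Context, profile string) error {
	out, err := exec.CommandContext(ctx, "kubectl",
		"--context", profile, "get", "po", "-A",
		"-o=jsonpath={.items[*].metadata.name}",
		"--field-selector=status.phase!=Running").Output()
	if err != nil {
		return err
	}
	names := strings.Fields(string(out))
	if len(names) == 0 {
		// Calling "kubectl describe pod" with no names exits 1 with
		// "error: resource name may not be empty", as seen above.
		return nil
	}
	args := append([]string{"--context", profile, "describe", "pod"}, names...)
	cmd := exec.CommandContext(ctx, "kubectl", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if err := describeNonRunningPods(context.Background(), "pause-20220701225037-10065"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}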