=== RUN TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run: out/minikube-linux-amd64 start -p pause-20220728205408-9843 --alsologtostderr -v=1 --driver=docker --container-runtime=docker
=== CONT TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-20220728205408-9843 --alsologtostderr -v=1 --driver=docker --container-runtime=docker: (51.164627252s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got:
-- stdout --
* [pause-20220728205408-9843] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=14555
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
* Using the docker driver based on existing profile
* Starting control plane node pause-20220728205408-9843 in cluster pause-20220728205408-9843
* Pulling base image ...
* Updating the running docker "pause-20220728205408-9843" container ...
* Preparing Kubernetes v1.24.3 on Docker 20.10.17 ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "pause-20220728205408-9843" cluster and "default" namespace by default
-- /stdout --
** stderr **
I0728 20:55:06.549680 228788 out.go:296] Setting OutFile to fd 1 ...
I0728 20:55:06.549805 228788 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0728 20:55:06.549813 228788 out.go:309] Setting ErrFile to fd 2...
I0728 20:55:06.549819 228788 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0728 20:55:06.549958 228788 root.go:332] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/bin
I0728 20:55:06.550647 228788 out.go:303] Setting JSON to false
I0728 20:55:06.553160 228788 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2258,"bootTime":1659039449,"procs":1161,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1013-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0728 20:55:06.553242 228788 start.go:125] virtualization: kvm guest
I0728 20:55:06.555968 228788 out.go:177] * [pause-20220728205408-9843] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
I0728 20:55:06.557924 228788 notify.go:193] Checking for updates...
I0728 20:55:06.559304 228788 out.go:177] - MINIKUBE_LOCATION=14555
I0728 20:55:06.560698 228788 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0728 20:55:06.562128 228788 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
I0728 20:55:06.563657 228788 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube
I0728 20:55:06.565120 228788 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0728 20:55:06.567062 228788 config.go:178] Loaded profile config "pause-20220728205408-9843": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
I0728 20:55:06.567673 228788 driver.go:365] Setting default libvirt URI to qemu:///system
I0728 20:55:06.624866 228788 docker.go:137] docker version: linux-20.10.17
I0728 20:55:06.624988 228788 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0728 20:55:06.827064 228788 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:true NGoroutines:62 SystemTime:2022-07-28 20:55:06.682038733 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1013-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662447616 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
I0728 20:55:06.827207 228788 docker.go:254] overlay module found
I0728 20:55:06.828996 228788 out.go:177] * Using the docker driver based on existing profile
I0728 20:55:06.830329 228788 start.go:284] selected driver: docker
I0728 20:55:06.830344 228788 start.go:808] validating driver "docker" against &{Name:pause-20220728205408-9843 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:pause-20220728205408-9843 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
I0728 20:55:06.830466 228788 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0728 20:55:06.830545 228788 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0728 20:55:06.979626 228788 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:true NGoroutines:62 SystemTime:2022-07-28 20:55:06.873296146 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1013-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662447616 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
I0728 20:55:06.980632 228788 cni.go:95] Creating CNI manager for ""
I0728 20:55:06.980654 228788 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0728 20:55:06.980669 228788 start_flags.go:310] config:
{Name:pause-20220728205408-9843 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:pause-20220728205408-9843 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
I0728 20:55:06.983574 228788 out.go:177] * Starting control plane node pause-20220728205408-9843 in cluster pause-20220728205408-9843
I0728 20:55:06.984938 228788 cache.go:120] Beginning downloading kic base image for docker with docker
I0728 20:55:06.986321 228788 out.go:177] * Pulling base image ...
I0728 20:55:06.987633 228788 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
I0728 20:55:06.987658 228788 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon
I0728 20:55:06.987677 228788 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4
I0728 20:55:06.987697 228788 cache.go:57] Caching tarball of preloaded images
I0728 20:55:06.987952 228788 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0728 20:55:06.987978 228788 cache.go:60] Finished verifying existence of preloaded tar for v1.24.3 on docker
I0728 20:55:06.988174 228788 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/pause-20220728205408-9843/config.json ...
I0728 20:55:07.034613 228788 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon, skipping pull
I0728 20:55:07.034642 228788 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 exists in daemon, skipping load
I0728 20:55:07.034669 228788 cache.go:208] Successfully downloaded all kic artifacts
I0728 20:55:07.034716 228788 start.go:370] acquiring machines lock for pause-20220728205408-9843: {Name:mk37c10d3d4e09f44e572c8debf27104b571c658 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0728 20:55:07.034844 228788 start.go:374] acquired machines lock for "pause-20220728205408-9843" in 84.839µs
I0728 20:55:07.034870 228788 start.go:95] Skipping create...Using existing machine configuration
I0728 20:55:07.034876 228788 fix.go:55] fixHost starting:
I0728 20:55:07.035173 228788 cli_runner.go:164] Run: docker container inspect pause-20220728205408-9843 --format={{.State.Status}}
I0728 20:55:07.077405 228788 fix.go:103] recreateIfNeeded on pause-20220728205408-9843: state=Running err=<nil>
W0728 20:55:07.077439 228788 fix.go:129] unexpected machine state, will restart: <nil>
I0728 20:55:07.122056 228788 out.go:177] * Updating the running docker "pause-20220728205408-9843" container ...
I0728 20:55:07.124206 228788 machine.go:88] provisioning docker machine ...
I0728 20:55:07.124256 228788 ubuntu.go:169] provisioning hostname "pause-20220728205408-9843"
I0728 20:55:07.124324 228788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220728205408-9843
I0728 20:55:07.182426 228788 main.go:134] libmachine: Using SSH client type: native
I0728 20:55:07.182625 228788 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7daec0] 0x7ddf20 <nil> [] 0s} 127.0.0.1 49337 <nil> <nil>}
I0728 20:55:07.182648 228788 main.go:134] libmachine: About to run SSH command:
sudo hostname pause-20220728205408-9843 && echo "pause-20220728205408-9843" | sudo tee /etc/hostname
I0728 20:55:07.350125 228788 main.go:134] libmachine: SSH cmd err, output: <nil>: pause-20220728205408-9843
I0728 20:55:07.350201 228788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220728205408-9843
I0728 20:55:07.400286 228788 main.go:134] libmachine: Using SSH client type: native
I0728 20:55:07.400430 228788 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7daec0] 0x7ddf20 <nil> [] 0s} 127.0.0.1 49337 <nil> <nil>}
I0728 20:55:07.400451 228788 main.go:134] libmachine: About to run SSH command:
if ! grep -xq '.*\spause-20220728205408-9843' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-20220728205408-9843/g' /etc/hosts;
else
echo '127.0.1.1 pause-20220728205408-9843' | sudo tee -a /etc/hosts;
fi
fi
I0728 20:55:07.544371 228788 main.go:134] libmachine: SSH cmd err, output: <nil>:
I0728 20:55:07.544413 228788 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube}
I0728 20:55:07.544469 228788 ubuntu.go:177] setting up certificates
I0728 20:55:07.544480 228788 provision.go:83] configureAuth start
I0728 20:55:07.544538 228788 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-20220728205408-9843
I0728 20:55:07.591334 228788 provision.go:138] copyHostCerts
I0728 20:55:07.591401 228788 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem, removing ...
I0728 20:55:07.591418 228788 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem
I0728 20:55:07.591608 228788 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem (1082 bytes)
I0728 20:55:07.591745 228788 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem, removing ...
I0728 20:55:07.591765 228788 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem
I0728 20:55:07.591810 228788 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem (1123 bytes)
I0728 20:55:07.591902 228788 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem, removing ...
I0728 20:55:07.591917 228788 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem
I0728 20:55:07.591956 228788 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem (1675 bytes)
I0728 20:55:07.592031 228788 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem org=jenkins.pause-20220728205408-9843 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube pause-20220728205408-9843]
I0728 20:55:07.851200 228788 provision.go:172] copyRemoteCerts
I0728 20:55:07.851266 228788 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0728 20:55:07.851326 228788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220728205408-9843
I0728 20:55:07.900148 228788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49337 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/pause-20220728205408-9843/id_rsa Username:docker}
I0728 20:55:07.995063 228788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0728 20:55:08.013165 228788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
I0728 20:55:08.037563 228788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0728 20:55:08.062273 228788 provision.go:86] duration metric: configureAuth took 517.780992ms
I0728 20:55:08.062295 228788 ubuntu.go:193] setting minikube options for container-runtime
I0728 20:55:08.062478 228788 config.go:178] Loaded profile config "pause-20220728205408-9843": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
I0728 20:55:08.062542 228788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220728205408-9843
I0728 20:55:08.106202 228788 main.go:134] libmachine: Using SSH client type: native
I0728 20:55:08.106490 228788 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7daec0] 0x7ddf20 <nil> [] 0s} 127.0.0.1 49337 <nil> <nil>}
I0728 20:55:08.106522 228788 main.go:134] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0728 20:55:08.245427 228788 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
I0728 20:55:08.245452 228788 ubuntu.go:71] root file system type: overlay
I0728 20:55:08.245636 228788 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0728 20:55:08.245701 228788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220728205408-9843
I0728 20:55:08.288583 228788 main.go:134] libmachine: Using SSH client type: native
I0728 20:55:08.288756 228788 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7daec0] 0x7ddf20 <nil> [] 0s} 127.0.0.1 49337 <nil> <nil>}
I0728 20:55:08.288863 228788 main.go:134] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0728 20:55:08.430416 228788 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0728 20:55:08.430497 228788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220728205408-9843
I0728 20:55:08.478270 228788 main.go:134] libmachine: Using SSH client type: native
I0728 20:55:08.478468 228788 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7daec0] 0x7ddf20 <nil> [] 0s} 127.0.0.1 49337 <nil> <nil>}
I0728 20:55:08.478506 228788 main.go:134] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0728 20:55:08.609548 228788 main.go:134] libmachine: SSH cmd err, output: <nil>:
I0728 20:55:08.609587 228788 machine.go:91] provisioned docker machine in 1.485344057s
I0728 20:55:08.609597 228788 start.go:307] post-start starting for "pause-20220728205408-9843" (driver="docker")
I0728 20:55:08.609605 228788 start.go:334] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0728 20:55:08.609667 228788 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0728 20:55:08.609721 228788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220728205408-9843
I0728 20:55:08.683552 228788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49337 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/pause-20220728205408-9843/id_rsa Username:docker}
I0728 20:55:08.785576 228788 ssh_runner.go:195] Run: cat /etc/os-release
I0728 20:55:08.789366 228788 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0728 20:55:08.789397 228788 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0728 20:55:08.789410 228788 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0728 20:55:08.789417 228788 info.go:137] Remote host: Ubuntu 20.04.4 LTS
I0728 20:55:08.789437 228788 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/addons for local assets ...
I0728 20:55:08.789492 228788 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files for local assets ...
I0728 20:55:08.789592 228788 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/98432.pem -> 98432.pem in /etc/ssl/certs
I0728 20:55:08.789731 228788 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0728 20:55:08.808699 228788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/98432.pem --> /etc/ssl/certs/98432.pem (1708 bytes)
I0728 20:55:08.830208 228788 start.go:310] post-start completed in 220.595873ms
I0728 20:55:08.830279 228788 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0728 20:55:08.830328 228788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220728205408-9843
I0728 20:55:08.876300 228788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49337 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/pause-20220728205408-9843/id_rsa Username:docker}
I0728 20:55:08.970045 228788 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0728 20:55:08.976578 228788 fix.go:57] fixHost completed within 1.941695416s
I0728 20:55:08.976601 228788 start.go:82] releasing machines lock for "pause-20220728205408-9843", held for 1.941741131s
I0728 20:55:08.976682 228788 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-20220728205408-9843
I0728 20:55:09.011838 228788 ssh_runner.go:195] Run: systemctl --version
I0728 20:55:09.011891 228788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220728205408-9843
I0728 20:55:09.011925 228788 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
I0728 20:55:09.011984 228788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220728205408-9843
I0728 20:55:09.051340 228788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49337 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/pause-20220728205408-9843/id_rsa Username:docker}
I0728 20:55:09.054442 228788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49337 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/pause-20220728205408-9843/id_rsa Username:docker}
I0728 20:55:09.157624 228788 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0728 20:55:09.168041 228788 cruntime.go:273] skipping containerd shutdown because we are bound to it
I0728 20:55:09.168093 228788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0728 20:55:09.182466 228788 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0728 20:55:09.196564 228788 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0728 20:55:09.312258 228788 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0728 20:55:09.417160 228788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0728 20:55:09.525190 228788 ssh_runner.go:195] Run: sudo systemctl restart docker
I0728 20:55:25.684698 228788 ssh_runner.go:235] Completed: sudo systemctl restart docker: (16.159474058s)
I0728 20:55:25.684768 228788 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0728 20:55:25.925241 228788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0728 20:55:26.126391 228788 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
I0728 20:55:26.142651 228788 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0728 20:55:26.142721 228788 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0728 20:55:26.147821 228788 start.go:471] Will wait 60s for crictl version
I0728 20:55:26.147886 228788 ssh_runner.go:195] Run: sudo crictl version
I0728 20:55:26.220685 228788 start.go:480] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 20.10.17
RuntimeApiVersion: 1.41.0
I0728 20:55:26.220753 228788 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0728 20:55:26.278374 228788 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0728 20:55:26.447644 228788 out.go:204] * Preparing Kubernetes v1.24.3 on Docker 20.10.17 ...
I0728 20:55:26.447782 228788 cli_runner.go:164] Run: docker network inspect pause-20220728205408-9843 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0728 20:55:26.500806 228788 ssh_runner.go:195] Run: grep 192.168.67.1 host.minikube.internal$ /etc/hosts
I0728 20:55:26.505800 228788 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
I0728 20:55:26.505872 228788 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0728 20:55:26.566643 228788 docker.go:611] Got preloaded images: -- stdout --
k8s.gcr.io/kube-apiserver:v1.24.3
k8s.gcr.io/kube-scheduler:v1.24.3
k8s.gcr.io/kube-proxy:v1.24.3
k8s.gcr.io/kube-controller-manager:v1.24.3
k8s.gcr.io/etcd:3.5.3-0
k8s.gcr.io/pause:3.7
k8s.gcr.io/coredns/coredns:v1.8.6
k8s.gcr.io/pause:3.6
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0728 20:55:26.588406 228788 docker.go:542] Images already preloaded, skipping extraction
I0728 20:55:26.588483 228788 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0728 20:55:26.635252 228788 docker.go:611] Got preloaded images: -- stdout --
k8s.gcr.io/kube-apiserver:v1.24.3
k8s.gcr.io/kube-controller-manager:v1.24.3
k8s.gcr.io/kube-scheduler:v1.24.3
k8s.gcr.io/kube-proxy:v1.24.3
k8s.gcr.io/etcd:3.5.3-0
k8s.gcr.io/pause:3.7
k8s.gcr.io/coredns/coredns:v1.8.6
k8s.gcr.io/pause:3.6
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0728 20:55:26.635274 228788 cache_images.go:84] Images are preloaded, skipping loading
I0728 20:55:26.635322 228788 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0728 20:55:26.729574 228788 cni.go:95] Creating CNI manager for ""
I0728 20:55:26.729602 228788 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0728 20:55:26.729621 228788 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0728 20:55:26.729638 228788 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.24.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-20220728205408-9843 NodeName:pause-20220728205408-9843 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0728 20:55:26.729811 228788 kubeadm.go:162] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.67.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/cri-dockerd.sock
name: "pause-20220728205408-9843"
kubeletExtraArgs:
node-ip: 192.168.67.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.24.3
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0728 20:55:26.729912 228788 kubeadm.go:961] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.24.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=pause-20220728205408-9843 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
[Install]
config:
{KubernetesVersion:v1.24.3 ClusterName:pause-20220728205408-9843 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0728 20:55:26.729965 228788 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.3
I0728 20:55:26.749419 228788 binaries.go:44] Found k8s binaries, skipping transfer
I0728 20:55:26.749492 228788 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0728 20:55:26.757769 228788 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (487 bytes)
I0728 20:55:26.820576 228788 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0728 20:55:26.839029 228788 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2048 bytes)
I0728 20:55:26.857287 228788 ssh_runner.go:195] Run: grep 192.168.67.2 control-plane.minikube.internal$ /etc/hosts
I0728 20:55:26.861207 228788 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/pause-20220728205408-9843 for IP: 192.168.67.2
I0728 20:55:26.861317 228788 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.key
I0728 20:55:26.861364 228788 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.key
I0728 20:55:26.861456 228788 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/pause-20220728205408-9843/client.key
I0728 20:55:26.861539 228788 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/pause-20220728205408-9843/apiserver.key.c7fa3a9e
I0728 20:55:26.861598 228788 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/pause-20220728205408-9843/proxy-client.key
I0728 20:55:26.861734 228788 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/9843.pem (1338 bytes)
W0728 20:55:26.861777 228788 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/9843_empty.pem, impossibly tiny 0 bytes
I0728 20:55:26.861802 228788 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem (1675 bytes)
I0728 20:55:26.861847 228788 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem (1082 bytes)
I0728 20:55:26.861887 228788 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem (1123 bytes)
I0728 20:55:26.861926 228788 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem (1675 bytes)
I0728 20:55:26.861997 228788 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/98432.pem (1708 bytes)
I0728 20:55:26.862780 228788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/pause-20220728205408-9843/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0728 20:55:26.886562 228788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/pause-20220728205408-9843/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0728 20:55:26.912741 228788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/pause-20220728205408-9843/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0728 20:55:26.950752 228788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/pause-20220728205408-9843/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0728 20:55:27.104913 228788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0728 20:55:27.130011 228788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0728 20:55:27.158668 228788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0728 20:55:27.184820 228788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0728 20:55:27.242361 228788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/9843.pem --> /usr/share/ca-certificates/9843.pem (1338 bytes)
I0728 20:55:27.318290 228788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/98432.pem --> /usr/share/ca-certificates/98432.pem (1708 bytes)
I0728 20:55:27.352769 228788 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0728 20:55:27.371643 228788 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0728 20:55:27.384325 228788 ssh_runner.go:195] Run: openssl version
I0728 20:55:27.389228 228788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9843.pem && ln -fs /usr/share/ca-certificates/9843.pem /etc/ssl/certs/9843.pem"
I0728 20:55:27.396642 228788 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9843.pem
I0728 20:55:27.399769 228788 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul 28 20:30 /usr/share/ca-certificates/9843.pem
I0728 20:55:27.399818 228788 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9843.pem
I0728 20:55:27.404551 228788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9843.pem /etc/ssl/certs/51391683.0"
I0728 20:55:27.411304 228788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/98432.pem && ln -fs /usr/share/ca-certificates/98432.pem /etc/ssl/certs/98432.pem"
I0728 20:55:27.418548 228788 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/98432.pem
I0728 20:55:27.421587 228788 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul 28 20:30 /usr/share/ca-certificates/98432.pem
I0728 20:55:27.421633 228788 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/98432.pem
I0728 20:55:27.426376 228788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/98432.pem /etc/ssl/certs/3ec20f2e.0"
I0728 20:55:27.432922 228788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0728 20:55:27.440160 228788 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0728 20:55:27.443047 228788 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul 28 20:26 /usr/share/ca-certificates/minikubeCA.pem
I0728 20:55:27.443095 228788 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0728 20:55:27.447804 228788 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0728 20:55:27.454484 228788 kubeadm.go:395] StartCluster: {Name:pause-20220728205408-9843 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:pause-20220728205408-9843 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
I0728 20:55:27.454620 228788 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0728 20:55:27.488910 228788 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0728 20:55:27.496274 228788 kubeadm.go:410] found existing configuration files, will attempt cluster restart
I0728 20:55:27.496297 228788 kubeadm.go:626] restartCluster start
I0728 20:55:27.496342 228788 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0728 20:55:27.503050 228788 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0728 20:55:27.503755 228788 kubeconfig.go:92] found "pause-20220728205408-9843" server: "https://192.168.67.2:8443"
I0728 20:55:27.504324 228788 kapi.go:59] client config for pause-20220728205408-9843: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/pause-20220728205408-9843/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/pause-20220728205408-9843/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x173e480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0728 20:55:27.504831 228788 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0728 20:55:27.511522 228788 api_server.go:165] Checking apiserver status ...
I0728 20:55:27.511573 228788 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0728 20:55:27.519561 228788 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0728 20:55:27.719997 228788 api_server.go:165] Checking apiserver status ...
I0728 20:55:27.720086 228788 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0728 20:55:27.729985 228788 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0728 20:55:27.920245 228788 api_server.go:165] Checking apiserver status ...
I0728 20:55:27.920341 228788 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0728 20:55:27.929368 228788 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0728 20:55:28.120650 228788 api_server.go:165] Checking apiserver status ...
I0728 20:55:28.120716 228788 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0728 20:55:28.130112 228788 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0728 20:55:28.320467 228788 api_server.go:165] Checking apiserver status ...
I0728 20:55:28.320539 228788 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0728 20:55:28.329564 228788 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0728 20:55:28.519854 228788 api_server.go:165] Checking apiserver status ...
I0728 20:55:28.519929 228788 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0728 20:55:28.529463 228788 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0728 20:55:28.720666 228788 api_server.go:165] Checking apiserver status ...
I0728 20:55:28.720771 228788 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0728 20:55:28.729590 228788 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0728 20:55:28.919799 228788 api_server.go:165] Checking apiserver status ...
I0728 20:55:28.919866 228788 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0728 20:55:28.928943 228788 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0728 20:55:29.120286 228788 api_server.go:165] Checking apiserver status ...
I0728 20:55:29.120351 228788 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0728 20:55:29.129345 228788 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0728 20:55:29.320686 228788 api_server.go:165] Checking apiserver status ...
I0728 20:55:29.320777 228788 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0728 20:55:29.329544 228788 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0728 20:55:29.519730 228788 api_server.go:165] Checking apiserver status ...
I0728 20:55:29.519807 228788 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0728 20:55:29.527772 228788 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0728 20:55:29.720081 228788 api_server.go:165] Checking apiserver status ...
I0728 20:55:29.720157 228788 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0728 20:55:29.729047 228788 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0728 20:55:29.920372 228788 api_server.go:165] Checking apiserver status ...
I0728 20:55:29.920436 228788 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0728 20:55:29.929147 228788 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0728 20:55:30.120459 228788 api_server.go:165] Checking apiserver status ...
I0728 20:55:30.120520 228788 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0728 20:55:30.129623 228788 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0728 20:55:30.319916 228788 api_server.go:165] Checking apiserver status ...
I0728 20:55:30.320000 228788 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0728 20:55:30.328983 228788 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0728 20:55:30.520303 228788 api_server.go:165] Checking apiserver status ...
I0728 20:55:30.520382 228788 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0728 20:55:30.529324 228788 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0728 20:55:30.529340 228788 api_server.go:165] Checking apiserver status ...
I0728 20:55:30.529370 228788 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0728 20:55:30.537215 228788 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
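
The repeated probes above are how the second start decides whether the existing control plane can be reused: minikube re-runs sudo pgrep -xnf kube-apiserver.*minikube.* roughly every 200 ms (per the timestamps) and gives up after about 3 s, which is what produces the "timed out waiting for the condition" verdict on the next line. A minimal Go sketch of such a probe loop follows; the interval and budget are read off the timestamps above rather than taken from minikube's source, and the function name is hypothetical.
-- go sketch --
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls pgrep until the kube-apiserver command line
// matches or the budget runs out. The ~200 ms / ~3 s cadence mirrors the
// timestamps in the log; minikube's real constants may differ.
func waitForAPIServerProcess(interval, budget time.Duration) error {
	deadline := time.Now().Add(budget)
	for {
		// -x: match the whole command line, -n: newest match, -f: match full args
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			fmt.Printf("apiserver pid: %s", out)
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for the condition")
		}
		time.Sleep(interval)
	}
}

func main() {
	if err := waitForAPIServerProcess(200*time.Millisecond, 3*time.Second); err != nil {
		fmt.Println("needs reconfigure: apiserver error:", err)
	}
}
-- /go sketch --
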
I0728 20:55:30.537236 228788 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
I0728 20:55:30.537242 228788 kubeadm.go:1092] stopping kube-system containers ...
I0728 20:55:30.537281 228788 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0728 20:55:30.569983 228788 docker.go:443] Stopping containers: [935c125c4397 2ecf26cec7af 6332eed4991d d508b7234c30 bfbc7b0761b0 cf346db4ac95 eb1a00c43d2d 8d0ece39f028 359b6e959cda 8825b8a486cb ff4527bd1426 7fca930a27f7 a9e0b3140400 421cfb978927 a7ad70d1f49e e882b363f4b0 031a13e741ff fe858fd77312 a303f9e2cba1 ac39d584aee0 0e11a2a1e87d 9d0b922b27b5]
I0728 20:55:30.570050 228788 ssh_runner.go:195] Run: docker stop 935c125c4397 2ecf26cec7af 6332eed4991d d508b7234c30 bfbc7b0761b0 cf346db4ac95 eb1a00c43d2d 8d0ece39f028 359b6e959cda 8825b8a486cb ff4527bd1426 7fca930a27f7 a9e0b3140400 421cfb978927 a7ad70d1f49e e882b363f4b0 031a13e741ff fe858fd77312 a303f9e2cba1 ac39d584aee0 0e11a2a1e87d 9d0b922b27b5
I0728 20:55:32.077080 228788 ssh_runner.go:235] Completed: docker stop 935c125c4397 2ecf26cec7af 6332eed4991d d508b7234c30 bfbc7b0761b0 cf346db4ac95 eb1a00c43d2d 8d0ece39f028 359b6e959cda 8825b8a486cb ff4527bd1426 7fca930a27f7 a9e0b3140400 421cfb978927 a7ad70d1f49e e882b363f4b0 031a13e741ff fe858fd77312 a303f9e2cba1 ac39d584aee0 0e11a2a1e87d 9d0b922b27b5: (1.506992145s)
W0728 20:55:32.077163 228788 kubeadm.go:679] Failed to stop kube-system containers: port conflicts may arise: stop: docker: docker stop 935c125c4397 2ecf26cec7af 6332eed4991d d508b7234c30 bfbc7b0761b0 cf346db4ac95 eb1a00c43d2d 8d0ece39f028 359b6e959cda 8825b8a486cb ff4527bd1426 7fca930a27f7 a9e0b3140400 421cfb978927 a7ad70d1f49e e882b363f4b0 031a13e741ff fe858fd77312 a303f9e2cba1 ac39d584aee0 0e11a2a1e87d 9d0b922b27b5: Process exited with status 1
stdout:
2ecf26cec7af
6332eed4991d
d508b7234c30
bfbc7b0761b0
cf346db4ac95
eb1a00c43d2d
8d0ece39f028
359b6e959cda
8825b8a486cb
ff4527bd1426
7fca930a27f7
a9e0b3140400
421cfb978927
a7ad70d1f49e
e882b363f4b0
031a13e741ff
fe858fd77312
a303f9e2cba1
ac39d584aee0
0e11a2a1e87d
9d0b922b27b5
stderr:
Error response from daemon: No such container: 935c125c4397
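
The non-zero exit above is benign: stdout lists the other 21 containers as stopped, and stderr shows the only failure is 935c125c4397, which had already been removed. docker stop processes every ID it is given but exits 1 if any of them is missing, which is why the whole batch is reported as failed. A minimal Go sketch of a more tolerant per-container stop (assuming docker on PATH; this is not how minikube does it):
-- go sketch --
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// stopContainers stops each ID with its own `docker stop` call so one
// already-removed container cannot fail the whole batch, as happened above.
func stopContainers(ids []string) {
	for _, id := range ids {
		out, err := exec.Command("docker", "stop", id).CombinedOutput()
		if err != nil && strings.Contains(string(out), "No such container") {
			continue // already gone; nothing to stop
		}
		if err != nil {
			fmt.Printf("stop %s failed: %v: %s", id, err, out)
		}
	}
}

func main() {
	stopContainers([]string{"935c125c4397", "2ecf26cec7af"})
}
-- /go sketch --
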
I0728 20:55:32.077223 228788 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I0728 20:55:32.190973 228788 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0728 20:55:32.199520 228788 kubeadm.go:155] found existing configuration files:
-rw------- 1 root root 5643 Jul 28 20:54 /etc/kubernetes/admin.conf
-rw------- 1 root root 5652 Jul 28 20:54 /etc/kubernetes/controller-manager.conf
-rw------- 1 root root 2039 Jul 28 20:54 /etc/kubernetes/kubelet.conf
-rw------- 1 root root 5600 Jul 28 20:54 /etc/kubernetes/scheduler.conf
I0728 20:55:32.199578 228788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0728 20:55:32.207804 228788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0728 20:55:32.215039 228788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0728 20:55:32.222476 228788 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
stdout:
stderr:
I0728 20:55:32.222527 228788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0728 20:55:32.229791 228788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0728 20:55:32.237694 228788 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
stdout:
stderr:
I0728 20:55:32.237744 228788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0728 20:55:32.244868 228788 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0728 20:55:32.253004 228788 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I0728 20:55:32.253024 228788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0728 20:55:32.302844 228788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0728 20:55:33.099825 228788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0728 20:55:33.352846 228788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0728 20:55:33.417801 228788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
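
The five commands above replay individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the freshly copied kubeadm.yaml instead of running a full kubeadm init. A self-contained Go sketch that issues the same sequence, assuming it runs on the node with kubeadm on PATH and with error handling simplified:
-- go sketch --
package main

import (
	"os/exec"
	"strings"
)

// Replays the kubeadm init phase sequence logged above against the
// generated config. Each phase is invoked on its own, exactly as in the log.
func main() {
	const cfg = "/var/tmp/minikube/kubeadm.yaml"
	phases := []string{
		"certs all",
		"kubeconfig all",
		"kubelet-start",
		"control-plane all",
		"etcd local",
	}
	for _, p := range phases {
		args := append([]string{"init", "phase"}, strings.Fields(p)...)
		args = append(args, "--config", cfg)
		if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
			panic(string(out)) // simplified: the real caller logs and retries
		}
	}
}
-- /go sketch --
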
I0728 20:55:33.530248 228788 api_server.go:51] waiting for apiserver process to appear ...
I0728 20:55:33.530308 228788 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:55:34.048457 228788 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:55:34.546211 228788 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:55:34.561497 228788 api_server.go:71] duration metric: took 1.031248699s to wait for apiserver process to appear ...
I0728 20:55:34.561529 228788 api_server.go:87] waiting for apiserver healthz status ...
I0728 20:55:34.561542 228788 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I0728 20:55:34.561846 228788 api_server.go:256] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
I0728 20:55:35.062692 228788 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I0728 20:55:38.588907 228788 api_server.go:266] https://192.168.67.2:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0728 20:55:38.588936 228788 api_server.go:102] status: https://192.168.67.2:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0728 20:55:39.062654 228788 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I0728 20:55:39.067742 228788 api_server.go:266] https://192.168.67.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0728 20:55:39.067769 228788 api_server.go:102] status: https://192.168.67.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0728 20:55:39.562281 228788 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I0728 20:55:39.643856 228788 api_server.go:266] https://192.168.67.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0728 20:55:39.643899 228788 api_server.go:102] status: https://192.168.67.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0728 20:55:40.062469 228788 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I0728 20:55:40.234822 228788 api_server.go:266] https://192.168.67.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0728 20:55:40.234861 228788 api_server.go:102] status: https://192.168.67.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0728 20:55:40.562184 228788 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I0728 20:55:40.614586 228788 api_server.go:266] https://192.168.67.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0728 20:55:40.614620 228788 api_server.go:102] status: https://192.168.67.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0728 20:55:41.062890 228788 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I0728 20:55:41.069409 228788 api_server.go:266] https://192.168.67.2:8443/healthz returned 200:
ok
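
The progression above is the normal startup sequence for a restarted apiserver: connection refused while the process comes up, 403 for system:anonymous until RBAC is bootstrapped, 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes poststarthooks finish, then 200 "ok". A minimal Go poller in the same spirit; skipping TLS verification here is a stand-in for trusting .minikube/ca.crt and is not something a production client should do:
-- go sketch --
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// pollHealthz issues anonymous GETs against /healthz until it returns 200,
// mirroring the connection-refused -> 403 -> 500 -> 200 progression above.
func pollHealthz(endpoint string) {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	for {
		resp, err := client.Get(endpoint)
		if err != nil {
			fmt.Println("not up yet:", err) // e.g. connection refused
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("%d: %s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	pollHealthz("https://192.168.67.2:8443/healthz")
}
-- /go sketch --
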
I0728 20:55:41.077624 228788 api_server.go:140] control plane version: v1.24.3
I0728 20:55:41.077652 228788 api_server.go:130] duration metric: took 6.516116663s to wait for apiserver health ...
I0728 20:55:41.077661 228788 cni.go:95] Creating CNI manager for ""
I0728 20:55:41.077667 228788 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0728 20:55:41.077677 228788 system_pods.go:43] waiting for kube-system pods to appear ...
I0728 20:55:41.088031 228788 system_pods.go:59] 6 kube-system pods found
I0728 20:55:41.088063 228788 system_pods.go:61] "coredns-6d4b75cb6d-8z2tg" [7d00c451-82b6-48d5-beaa-bc5fa6f1a242] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0728 20:55:41.088072 228788 system_pods.go:61] "etcd-pause-20220728205408-9843" [6559fba2-a38c-4c54-8cb2-2a370253e16a] Running
I0728 20:55:41.088116 228788 system_pods.go:61] "kube-apiserver-pause-20220728205408-9843" [3b2dac3b-2cc5-4181-9ec8-1b75aa19132e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I0728 20:55:41.088137 228788 system_pods.go:61] "kube-controller-manager-pause-20220728205408-9843" [2d218ea7-e53d-42a8-a37c-84c4f5057aae] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I0728 20:55:41.088151 228788 system_pods.go:61] "kube-proxy-bgdg9" [4011009e-4b1d-4e94-9355-f2de01699705] Running
I0728 20:55:41.088162 228788 system_pods.go:61] "kube-scheduler-pause-20220728205408-9843" [5cd50c77-b5ad-4ba4-83f0-c5c191231738] Running
I0728 20:55:41.088173 228788 system_pods.go:74] duration metric: took 10.490998ms to wait for pod list to return data ...
I0728 20:55:41.088180 228788 node_conditions.go:102] verifying NodePressure condition ...
I0728 20:55:41.114059 228788 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
I0728 20:55:41.114103 228788 node_conditions.go:123] node cpu capacity is 8
I0728 20:55:41.114117 228788 node_conditions.go:105] duration metric: took 25.928554ms to run NodePressure ...
I0728 20:55:41.114138 228788 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0728 20:55:41.382653 228788 kubeadm.go:762] waiting for restarted kubelet to initialise ...
I0728 20:55:41.388800 228788 kubeadm.go:777] kubelet initialised
I0728 20:55:41.388828 228788 kubeadm.go:778] duration metric: took 6.148316ms waiting for restarted kubelet to initialise ...
I0728 20:55:41.388838 228788 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0728 20:55:41.434359 228788 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-8z2tg" in "kube-system" namespace to be "Ready" ...
I0728 20:55:43.448133 228788 pod_ready.go:102] pod "coredns-6d4b75cb6d-8z2tg" in "kube-system" namespace has status "Ready":"False"
I0728 20:55:45.946945 228788 pod_ready.go:102] pod "coredns-6d4b75cb6d-8z2tg" in "kube-system" namespace has status "Ready":"False"
I0728 20:55:46.946855 228788 pod_ready.go:92] pod "coredns-6d4b75cb6d-8z2tg" in "kube-system" namespace has status "Ready":"True"
I0728 20:55:46.946880 228788 pod_ready.go:81] duration metric: took 5.512490441s waiting for pod "coredns-6d4b75cb6d-8z2tg" in "kube-system" namespace to be "Ready" ...
I0728 20:55:46.946889 228788 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-20220728205408-9843" in "kube-system" namespace to be "Ready" ...
I0728 20:55:48.958021 228788 pod_ready.go:102] pod "etcd-pause-20220728205408-9843" in "kube-system" namespace has status "Ready":"False"
I0728 20:55:51.457319 228788 pod_ready.go:102] pod "etcd-pause-20220728205408-9843" in "kube-system" namespace has status "Ready":"False"
I0728 20:55:53.957467 228788 pod_ready.go:92] pod "etcd-pause-20220728205408-9843" in "kube-system" namespace has status "Ready":"True"
I0728 20:55:53.957494 228788 pod_ready.go:81] duration metric: took 7.010599188s waiting for pod "etcd-pause-20220728205408-9843" in "kube-system" namespace to be "Ready" ...
I0728 20:55:53.957504 228788 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-20220728205408-9843" in "kube-system" namespace to be "Ready" ...
I0728 20:55:53.961627 228788 pod_ready.go:92] pod "kube-apiserver-pause-20220728205408-9843" in "kube-system" namespace has status "Ready":"True"
I0728 20:55:53.961648 228788 pod_ready.go:81] duration metric: took 4.137402ms waiting for pod "kube-apiserver-pause-20220728205408-9843" in "kube-system" namespace to be "Ready" ...
I0728 20:55:53.961661 228788 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-20220728205408-9843" in "kube-system" namespace to be "Ready" ...
I0728 20:55:53.965773 228788 pod_ready.go:92] pod "kube-controller-manager-pause-20220728205408-9843" in "kube-system" namespace has status "Ready":"True"
I0728 20:55:53.965787 228788 pod_ready.go:81] duration metric: took 4.118626ms waiting for pod "kube-controller-manager-pause-20220728205408-9843" in "kube-system" namespace to be "Ready" ...
I0728 20:55:53.965796 228788 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-bgdg9" in "kube-system" namespace to be "Ready" ...
I0728 20:55:53.969591 228788 pod_ready.go:92] pod "kube-proxy-bgdg9" in "kube-system" namespace has status "Ready":"True"
I0728 20:55:53.969608 228788 pod_ready.go:81] duration metric: took 3.807086ms waiting for pod "kube-proxy-bgdg9" in "kube-system" namespace to be "Ready" ...
I0728 20:55:53.969615 228788 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-20220728205408-9843" in "kube-system" namespace to be "Ready" ...
I0728 20:55:53.973057 228788 pod_ready.go:92] pod "kube-scheduler-pause-20220728205408-9843" in "kube-system" namespace has status "Ready":"True"
I0728 20:55:53.973073 228788 pod_ready.go:81] duration metric: took 3.452583ms waiting for pod "kube-scheduler-pause-20220728205408-9843" in "kube-system" namespace to be "Ready" ...
I0728 20:55:53.973079 228788 pod_ready.go:38] duration metric: took 12.584233081s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0728 20:55:53.973094 228788 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0728 20:55:53.980566 228788 ops.go:34] apiserver oom_adj: -16
I0728 20:55:53.980580 228788 kubeadm.go:630] restartCluster took 26.48427772s
I0728 20:55:53.980586 228788 kubeadm.go:397] StartCluster complete in 26.526110653s
I0728 20:55:53.980600 228788 settings.go:142] acquiring lock: {Name:mkde5d8a963775babd29807d073305db0e207bac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0728 20:55:53.980700 228788 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
I0728 20:55:53.981708 228788 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig: {Name:mkb4995ca3f52b12590ce75ae77ac748b457a292 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0728 20:55:53.982565 228788 kapi.go:59] client config for pause-20220728205408-9843: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/pause-20220728205408-9843/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/pause-20220728205408-9843/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x173e480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0728 20:55:53.984504 228788 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "pause-20220728205408-9843" rescaled to 1
I0728 20:55:53.984551 228788 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}
I0728 20:55:53.986590 228788 out.go:177] * Verifying Kubernetes components...
I0728 20:55:53.984584 228788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0728 20:55:53.984647 228788 addons.go:412] enableAddons start: toEnable=map[], additional=[]
I0728 20:55:53.986721 228788 addons.go:65] Setting storage-provisioner=true in profile "pause-20220728205408-9843"
I0728 20:55:53.986744 228788 addons.go:153] Setting addon storage-provisioner=true in "pause-20220728205408-9843"
W0728 20:55:53.986753 228788 addons.go:162] addon storage-provisioner should already be in state true
I0728 20:55:53.984757 228788 config.go:178] Loaded profile config "pause-20220728205408-9843": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
I0728 20:55:53.988198 228788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0728 20:55:53.986775 228788 addons.go:65] Setting default-storageclass=true in profile "pause-20220728205408-9843"
I0728 20:55:53.988280 228788 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-20220728205408-9843"
I0728 20:55:53.986820 228788 host.go:66] Checking if "pause-20220728205408-9843" exists ...
I0728 20:55:53.988590 228788 cli_runner.go:164] Run: docker container inspect pause-20220728205408-9843 --format={{.State.Status}}
I0728 20:55:53.988772 228788 cli_runner.go:164] Run: docker container inspect pause-20220728205408-9843 --format={{.State.Status}}
I0728 20:55:54.030083 228788 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0728 20:55:54.031570 228788 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0728 20:55:54.030752 228788 kapi.go:59] client config for pause-20220728205408-9843: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/pause-20220728205408-9843/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/pause-20220728205408-9843/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x173e480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0728 20:55:54.031594 228788 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0728 20:55:54.031763 228788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220728205408-9843
I0728 20:55:54.035865 228788 addons.go:153] Setting addon default-storageclass=true in "pause-20220728205408-9843"
W0728 20:55:54.035897 228788 addons.go:162] addon default-storageclass should already be in state true
I0728 20:55:54.035930 228788 host.go:66] Checking if "pause-20220728205408-9843" exists ...
I0728 20:55:54.036450 228788 cli_runner.go:164] Run: docker container inspect pause-20220728205408-9843 --format={{.State.Status}}
I0728 20:55:54.062751 228788 node_ready.go:35] waiting up to 6m0s for node "pause-20220728205408-9843" to be "Ready" ...
I0728 20:55:54.062796 228788 start.go:789] CoreDNS already contains "host.minikube.internal" host record, skipping...
I0728 20:55:54.074510 228788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49337 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/pause-20220728205408-9843/id_rsa Username:docker}
I0728 20:55:54.082029 228788 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
I0728 20:55:54.082049 228788 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0728 20:55:54.082101 228788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220728205408-9843
I0728 20:55:54.119436 228788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49337 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/pause-20220728205408-9843/id_rsa Username:docker}
I0728 20:55:54.157275 228788 node_ready.go:49] node "pause-20220728205408-9843" has status "Ready":"True"
I0728 20:55:54.157298 228788 node_ready.go:38] duration metric: took 94.514408ms waiting for node "pause-20220728205408-9843" to be "Ready" ...
I0728 20:55:54.157306 228788 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0728 20:55:54.170362 228788 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0728 20:55:54.213545 228788 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0728 20:55:54.357501 228788 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-8z2tg" in "kube-system" namespace to be "Ready" ...
I0728 20:55:54.755794 228788 pod_ready.go:92] pod "coredns-6d4b75cb6d-8z2tg" in "kube-system" namespace has status "Ready":"True"
I0728 20:55:54.755818 228788 pod_ready.go:81] duration metric: took 398.289531ms waiting for pod "coredns-6d4b75cb6d-8z2tg" in "kube-system" namespace to be "Ready" ...
I0728 20:55:54.755833 228788 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-20220728205408-9843" in "kube-system" namespace to be "Ready" ...
I0728 20:55:54.846531 228788 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
I0728 20:55:54.847824 228788 addons.go:414] enableAddons completed in 863.208744ms
I0728 20:55:55.155995 228788 pod_ready.go:92] pod "etcd-pause-20220728205408-9843" in "kube-system" namespace has status "Ready":"True"
I0728 20:55:55.156017 228788 pod_ready.go:81] duration metric: took 400.176809ms waiting for pod "etcd-pause-20220728205408-9843" in "kube-system" namespace to be "Ready" ...
I0728 20:55:55.156026 228788 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-20220728205408-9843" in "kube-system" namespace to be "Ready" ...
I0728 20:55:55.556032 228788 pod_ready.go:92] pod "kube-apiserver-pause-20220728205408-9843" in "kube-system" namespace has status "Ready":"True"
I0728 20:55:55.556067 228788 pod_ready.go:81] duration metric: took 400.032237ms waiting for pod "kube-apiserver-pause-20220728205408-9843" in "kube-system" namespace to be "Ready" ...
I0728 20:55:55.556082 228788 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-20220728205408-9843" in "kube-system" namespace to be "Ready" ...
I0728 20:55:55.956043 228788 pod_ready.go:92] pod "kube-controller-manager-pause-20220728205408-9843" in "kube-system" namespace has status "Ready":"True"
I0728 20:55:55.956069 228788 pod_ready.go:81] duration metric: took 399.978224ms waiting for pod "kube-controller-manager-pause-20220728205408-9843" in "kube-system" namespace to be "Ready" ...
I0728 20:55:55.956081 228788 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bgdg9" in "kube-system" namespace to be "Ready" ...
I0728 20:55:56.355801 228788 pod_ready.go:92] pod "kube-proxy-bgdg9" in "kube-system" namespace has status "Ready":"True"
I0728 20:55:56.355824 228788 pod_ready.go:81] duration metric: took 399.733762ms waiting for pod "kube-proxy-bgdg9" in "kube-system" namespace to be "Ready" ...
I0728 20:55:56.355836 228788 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-20220728205408-9843" in "kube-system" namespace to be "Ready" ...
I0728 20:55:56.755604 228788 pod_ready.go:92] pod "kube-scheduler-pause-20220728205408-9843" in "kube-system" namespace has status "Ready":"True"
I0728 20:55:56.755626 228788 pod_ready.go:81] duration metric: took 399.78195ms waiting for pod "kube-scheduler-pause-20220728205408-9843" in "kube-system" namespace to be "Ready" ...
I0728 20:55:56.755635 228788 pod_ready.go:38] duration metric: took 2.598319546s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0728 20:55:56.755657 228788 api_server.go:51] waiting for apiserver process to appear ...
I0728 20:55:56.755703 228788 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:55:56.765960 228788 api_server.go:71] duration metric: took 2.78138638s to wait for apiserver process to appear ...
I0728 20:55:56.765985 228788 api_server.go:87] waiting for apiserver healthz status ...
I0728 20:55:56.765999 228788 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I0728 20:55:56.771391 228788 api_server.go:266] https://192.168.67.2:8443/healthz returned 200:
ok
I0728 20:55:56.772200 228788 api_server.go:140] control plane version: v1.24.3
I0728 20:55:56.772218 228788 api_server.go:130] duration metric: took 6.226542ms to wait for apiserver health ...
I0728 20:55:56.772230 228788 system_pods.go:43] waiting for kube-system pods to appear ...
I0728 20:55:56.958841 228788 system_pods.go:59] 7 kube-system pods found
I0728 20:55:56.958878 228788 system_pods.go:61] "coredns-6d4b75cb6d-8z2tg" [7d00c451-82b6-48d5-beaa-bc5fa6f1a242] Running
I0728 20:55:56.958885 228788 system_pods.go:61] "etcd-pause-20220728205408-9843" [6559fba2-a38c-4c54-8cb2-2a370253e16a] Running
I0728 20:55:56.958893 228788 system_pods.go:61] "kube-apiserver-pause-20220728205408-9843" [3b2dac3b-2cc5-4181-9ec8-1b75aa19132e] Running
I0728 20:55:56.958900 228788 system_pods.go:61] "kube-controller-manager-pause-20220728205408-9843" [2d218ea7-e53d-42a8-a37c-84c4f5057aae] Running
I0728 20:55:56.958906 228788 system_pods.go:61] "kube-proxy-bgdg9" [4011009e-4b1d-4e94-9355-f2de01699705] Running
I0728 20:55:56.958913 228788 system_pods.go:61] "kube-scheduler-pause-20220728205408-9843" [5cd50c77-b5ad-4ba4-83f0-c5c191231738] Running
I0728 20:55:56.958920 228788 system_pods.go:61] "storage-provisioner" [bbaefc83-1444-4c06-837a-3e437bbb77f2] Running
I0728 20:55:56.958926 228788 system_pods.go:74] duration metric: took 186.690368ms to wait for pod list to return data ...
I0728 20:55:56.958936 228788 default_sa.go:34] waiting for default service account to be created ...
I0728 20:55:57.157423 228788 default_sa.go:45] found service account: "default"
I0728 20:55:57.157453 228788 default_sa.go:55] duration metric: took 198.506077ms for default service account to be created ...
I0728 20:55:57.157463 228788 system_pods.go:116] waiting for k8s-apps to be running ...
I0728 20:55:57.359189 228788 system_pods.go:86] 7 kube-system pods found
I0728 20:55:57.359220 228788 system_pods.go:89] "coredns-6d4b75cb6d-8z2tg" [7d00c451-82b6-48d5-beaa-bc5fa6f1a242] Running
I0728 20:55:57.359228 228788 system_pods.go:89] "etcd-pause-20220728205408-9843" [6559fba2-a38c-4c54-8cb2-2a370253e16a] Running
I0728 20:55:57.359235 228788 system_pods.go:89] "kube-apiserver-pause-20220728205408-9843" [3b2dac3b-2cc5-4181-9ec8-1b75aa19132e] Running
I0728 20:55:57.359241 228788 system_pods.go:89] "kube-controller-manager-pause-20220728205408-9843" [2d218ea7-e53d-42a8-a37c-84c4f5057aae] Running
I0728 20:55:57.359248 228788 system_pods.go:89] "kube-proxy-bgdg9" [4011009e-4b1d-4e94-9355-f2de01699705] Running
I0728 20:55:57.359254 228788 system_pods.go:89] "kube-scheduler-pause-20220728205408-9843" [5cd50c77-b5ad-4ba4-83f0-c5c191231738] Running
I0728 20:55:57.359260 228788 system_pods.go:89] "storage-provisioner" [bbaefc83-1444-4c06-837a-3e437bbb77f2] Running
I0728 20:55:57.359269 228788 system_pods.go:126] duration metric: took 201.799524ms to wait for k8s-apps to be running ...
I0728 20:55:57.359278 228788 system_svc.go:44] waiting for kubelet service to be running ....
I0728 20:55:57.359322 228788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0728 20:55:57.371297 228788 system_svc.go:56] duration metric: took 12.014273ms WaitForService to wait for kubelet.
I0728 20:55:57.371319 228788 kubeadm.go:572] duration metric: took 3.386749704s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0728 20:55:57.371336 228788 node_conditions.go:102] verifying NodePressure condition ...
I0728 20:55:57.555946 228788 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
I0728 20:55:57.555973 228788 node_conditions.go:123] node cpu capacity is 8
I0728 20:55:57.555986 228788 node_conditions.go:105] duration metric: took 184.644433ms to run NodePressure ...
I0728 20:55:57.556000 228788 start.go:216] waiting for startup goroutines ...
I0728 20:55:57.600867 228788 start.go:506] kubectl: 1.24.3, cluster: 1.24.3 (minor skew: 0)
I0728 20:55:57.603232 228788 out.go:177] * Done! kubectl is now configured to use "pause-20220728205408-9843" cluster and "default" namespace by default
** /stderr **
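
To recap the failure mode: the start itself succeeded ("Done!" above), but because none of the pgrep probes between 20:55:27 and 20:55:30 found a kube-apiserver process, minikube took the restartCluster/reconfigure path, so the no-reconfiguration message the test greps for at pause_test.go:100 was never printed. A sketch of that style of substring assertion, with hypothetical names that are not minikube's:
-- go sketch --
package pause

import (
	"strings"
	"testing"
)

// assertSecondStartOutput is a sketch of the kind of check made at
// pause_test.go:100: the combined output of the second start must contain
// the message marking the fast path that skips cluster reconfiguration.
func assertSecondStartOutput(t *testing.T, out, want string) {
	t.Helper()
	if !strings.Contains(out, want) {
		t.Errorf("expected the second start log output to include %q but got:\n%s", want, out)
	}
}
-- /go sketch --
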
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestPause/serial/SecondStartNoReconfiguration]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect pause-20220728205408-9843
helpers_test.go:235: (dbg) docker inspect pause-20220728205408-9843:
-- stdout --
[
{
"Id": "3fb439e0c221a49b3872aa9a8448387885fde8445e2e7a49655b9b12f3fccaa6",
"Created": "2022-07-28T20:54:18.261610669Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 212639,
"ExitCode": 0,
"Error": "",
"StartedAt": "2022-07-28T20:54:18.951427486Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:443d84da239e4e701685e1614ef94cd6b60d0f0b15265a51d4f657992a9c59d8",
"ResolvConfPath": "/var/lib/docker/containers/3fb439e0c221a49b3872aa9a8448387885fde8445e2e7a49655b9b12f3fccaa6/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/3fb439e0c221a49b3872aa9a8448387885fde8445e2e7a49655b9b12f3fccaa6/hostname",
"HostsPath": "/var/lib/docker/containers/3fb439e0c221a49b3872aa9a8448387885fde8445e2e7a49655b9b12f3fccaa6/hosts",
"LogPath": "/var/lib/docker/containers/3fb439e0c221a49b3872aa9a8448387885fde8445e2e7a49655b9b12f3fccaa6/3fb439e0c221a49b3872aa9a8448387885fde8445e2e7a49655b9b12f3fccaa6-json.log",
"Name": "/pause-20220728205408-9843",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"pause-20220728205408-9843:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "pause-20220728205408-9843",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 2147483648,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"KernelMemory": 0,
"KernelMemoryTCP": 0,
"MemoryReservation": 0,
"MemorySwap": 4294967296,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/c9095ec585f620b2b7b5b07073d9a6f9f0ffcba74afec8cde45e225fa2d9add2-init/diff:/var/lib/docker/overlay2/4199c6556cea42181cceb9906f820822443cb0b5f7ea328eca775adfd8d1708e/diff:/var/lib/docker/overlay2/b24dbffe8a02c42d5181cc376e50868eb2daf54ff07b615d6650ec5b797535fd/diff:/var/lib/docker/overlay2/660766159b0d261999d298e45910b16a8946daaf2bf143c351a5952f21e4ce59/diff:/var/lib/docker/overlay2/67889e79cf7ba13abbae4a874d98b70aa2870c90f7da1e9b916698f4d0d5a13d/diff:/var/lib/docker/overlay2/3134c941b14d494841493f2137121da94dc3191e293f397a832ccac047a6f9c5/diff:/var/lib/docker/overlay2/ba80490bd9be017b8e9ce98fe920b11a99bb56a4f613da7ea777e8e851200451/diff:/var/lib/docker/overlay2/d301c74cb5339bc9bc007153c8b68524f72892fecb626edc1dea29337889ee67/diff:/var/lib/docker/overlay2/b3adebbc2adf56d5de785c0fa4b2127a191eae5274ad6161012f59e1e8136eaa/diff:/var/lib/docker/overlay2/4cdbbd63ba765f5ed0d2515ec5a80ee18ca61d7200dc66dcd3e6829397bd15a6/diff:/var/lib/docker/overlay2/2441c7
ad19ee32fbf2a557b7a3303aa6670350555c7e57e8228795df81c39274/diff:/var/lib/docker/overlay2/fb5a05c8be9267430588e13bc96d29bf5eba6cfe65dffdd53b5c68a683371f07/diff:/var/lib/docker/overlay2/f5e54dab2e893d70cd9adea1081cb9d970e6fb6673ebb766c8c43f965e8186e8/diff:/var/lib/docker/overlay2/3ccbd571ef95f280a051b35e3f349d19e0d171bdc863fd1622a70ccc196855b6/diff:/var/lib/docker/overlay2/fabef18c4c7604f7510412b2942fc2af70e76c7f80369d546f330590988ed056/diff:/var/lib/docker/overlay2/dcab26aa57a8ce8a859c2adbc1adb22ef65acdc1a83ebc6b18a60c7a3cb3dd8f/diff:/var/lib/docker/overlay2/13603af21440f26866999875a26f9e88ec243a0101a9cda64d429141bd045fe7/diff:/var/lib/docker/overlay2/67d9e222c9fb201942639f613e92f686d758195118b7a10c49b882833f665253/diff:/var/lib/docker/overlay2/ebe921e8d0699dadbcaa5a4f92eb5b5c7aa68df1e9acf6ce0fc6b0161834cce3/diff:/var/lib/docker/overlay2/35a5973a506a54dbb275f743e439497153e9b7593061b61cecc318dde62545fb/diff:/var/lib/docker/overlay2/701264336a77431a21c1b6efed245aa3629feb70775affd7f67ffffa391a026c/diff:/var/lib/d
ocker/overlay2/8b6aa1b091bb5f97e231465ecd79889b8962ff7cc7b7a1956c8a434e2f44b664/diff:/var/lib/docker/overlay2/52e4531bcb1cdcae7ebcbcbc8239139b29a4a7ba9031c55b597f916f9802c576/diff:/var/lib/docker/overlay2/8956f31605b88ccd217e895de5f2163b2a86ee944bd4805c5bdf3fe58b782e92/diff:/var/lib/docker/overlay2/3a10bd0e6ad21b576c4b0e30403206fbd25df5a4f315cd84456a7f40e75f50cd/diff:/var/lib/docker/overlay2/f6a9cb2902dbdf745c3df96bdc9e81f352e27221eed3b83aa3a6fa02dc18d4d5/diff:/var/lib/docker/overlay2/4f12f2c1a00a4dc134d7f6e835417ec23ea905b5a638fdfd5f82d20997965ec3/diff:/var/lib/docker/overlay2/98865ed752433b8b59e76055af6cae52e491a81b0c202e53aa7671e79f706b13/diff:/var/lib/docker/overlay2/ae14114745b79ff18616d57b4a38e2c8781f15de8717176527f6c4ae0de5f8a7/diff:/var/lib/docker/overlay2/75bf9540aad128f56a4431eec06aa8215df35dc327c7531a20a38b31461bc43b/diff:/var/lib/docker/overlay2/26471df49233e2f02186ae90a4b4adad63dcce28b56378d3feaad1ac8da0a6f8/diff:/var/lib/docker/overlay2/475fcc49601486c8df0f1e0ccb755565cee6d9a3649121772fc715bdf9c
748f8/diff:/var/lib/docker/overlay2/ee157a254a59f89ceea14fd14a197d0966e2e812063c3a491b87f19c4d67750b/diff:/var/lib/docker/overlay2/fa44cebeeaea1c19fcd1d09cc224ba414119c57cd51541b89aa402fcc1466fd7/diff:/var/lib/docker/overlay2/1e2d8d61412a316e76385403d91cbb4a6e26362794eed86be6a433dc3eb5794a/diff:/var/lib/docker/overlay2/73f555b30601b74f8524bed7819729e89228457af1e276f81826c636105fe3b7/diff:/var/lib/docker/overlay2/b52b80028134836783b3a48186aa135cb8753d31f7c8caf150aa4cdb4103e90d/diff:/var/lib/docker/overlay2/8da3d321a6e82f660eb09cd89bdbf24366d82517326dec73ba25c01ff4c2f266/diff:/var/lib/docker/overlay2/4f1a38dc276744ded9fa33010b170d716f2fbe0724cc13ec8f8472d3d9f194d6/diff:/var/lib/docker/overlay2/7159ebfec82182edf221de518b1790661ad26171e64412e656b1f4d1f850785f/diff:/var/lib/docker/overlay2/b6670648c10e9f0fae89abf3712a633146e1dab188784d9e481e6f34682a97ec/diff:/var/lib/docker/overlay2/48c415c932354a7d3a3b4427d75d4f1cadb419ee3fa91f3f1ee7e517e77bae7c/diff",
"MergedDir": "/var/lib/docker/overlay2/c9095ec585f620b2b7b5b07073d9a6f9f0ffcba74afec8cde45e225fa2d9add2/merged",
"UpperDir": "/var/lib/docker/overlay2/c9095ec585f620b2b7b5b07073d9a6f9f0ffcba74afec8cde45e225fa2d9add2/diff",
"WorkDir": "/var/lib/docker/overlay2/c9095ec585f620b2b7b5b07073d9a6f9f0ffcba74afec8cde45e225fa2d9add2/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "volume",
"Name": "pause-20220728205408-9843",
"Source": "/var/lib/docker/volumes/pause-20220728205408-9843/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
},
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
}
],
"Config": {
"Hostname": "pause-20220728205408-9843",
"Domainname": "",
"User": "root",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "pause-20220728205408-9843",
"name.minikube.sigs.k8s.io": "pause-20220728205408-9843",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "de1ae3fe54f4aa564dd4a7144be9787f992747a29bdd3bd7ed096768d816472a",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49337"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49336"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49333"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49335"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49334"
}
]
},
"SandboxKey": "/var/run/docker/netns/de1ae3fe54f4",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"pause-20220728205408-9843": {
"IPAMConfig": {
"IPv4Address": "192.168.67.2"
},
"Links": null,
"Aliases": [
"3fb439e0c221",
"pause-20220728205408-9843"
],
"NetworkID": "d1fd6d8f5de8a3061a0c5298093dfe26d13957ea22c72fa51e58d52614dcfb64",
"EndpointID": "e0fd9777811b7913fe84e065dfdb9202fd840ff04d87f46242d51f89edec65d6",
"Gateway": "192.168.67.1",
"IPAddress": "192.168.67.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:c0:a8:43:02",
"DriverOpts": null
}
}
}
}
]
-- /stdout --
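Note: in the inspect dump above, the cluster's API server port (8443/tcp) is published on 127.0.0.1:49334. As a sketch of how to read that mapping without scanning the full JSON, a Go template passed to docker inspect extracts the same value; the container name is the profile name taken from the dump above:

$ docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' pause-20220728205408-9843

For this run that would print 49334, matching the Ports block above.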
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p pause-20220728205408-9843 -n pause-20220728205408-9843
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p pause-20220728205408-9843 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-20220728205408-9843 logs -n 25: (1.539730351s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs:
-- stdout --
*
* ==> Audit <==
* |---------|----------------------------------------|----------------------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|----------------------------------------|----------------------------------------|---------|---------|---------------------|---------------------|
| profile | list | minikube | jenkins | v1.26.0 | 28 Jul 22 20:53 UTC | 28 Jul 22 20:53 UTC |
| profile | list --output=json | minikube | jenkins | v1.26.0 | 28 Jul 22 20:53 UTC | 28 Jul 22 20:53 UTC |
| stop | -p | NoKubernetes-20220728205236-9843 | jenkins | v1.26.0 | 28 Jul 22 20:53 UTC | 28 Jul 22 20:53 UTC |
| | NoKubernetes-20220728205236-9843 | | | | | |
| start | -p | NoKubernetes-20220728205236-9843 | jenkins | v1.26.0 | 28 Jul 22 20:53 UTC | 28 Jul 22 20:54 UTC |
| | NoKubernetes-20220728205236-9843 | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| ssh | -p | NoKubernetes-20220728205236-9843 | jenkins | v1.26.0 | 28 Jul 22 20:54 UTC | |
| | NoKubernetes-20220728205236-9843 | | | | | |
| | sudo systemctl is-active --quiet | | | | | |
| | service kubelet | | | | | |
| delete | -p | NoKubernetes-20220728205236-9843 | jenkins | v1.26.0 | 28 Jul 22 20:54 UTC | 28 Jul 22 20:54 UTC |
| | NoKubernetes-20220728205236-9843 | | | | | |
| start | -p pause-20220728205408-9843 | pause-20220728205408-9843 | jenkins | v1.26.0 | 28 Jul 22 20:54 UTC | 28 Jul 22 20:55 UTC |
| | --memory=2048 | | | | | |
| | --install-addons=false | | | | | |
| | --wait=all --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| delete | -p | stopped-upgrade-20220728205236-9843 | jenkins | v1.26.0 | 28 Jul 22 20:54 UTC | 28 Jul 22 20:54 UTC |
| | stopped-upgrade-20220728205236-9843 | | | | | |
| start | -p | kubernetes-upgrade-20220728205415-9843 | jenkins | v1.26.0 | 28 Jul 22 20:54 UTC | 28 Jul 22 20:55 UTC |
| | kubernetes-upgrade-20220728205415-9843 | | | | | |
| | --memory=2200 | | | | | |
| | --kubernetes-version=v1.16.0 | | | | | |
| | --alsologtostderr -v=1 --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| delete | -p | missing-upgrade-20220728205236-9843 | jenkins | v1.26.0 | 28 Jul 22 20:54 UTC | 28 Jul 22 20:54 UTC |
| | missing-upgrade-20220728205236-9843 | | | | | |
| start | -p | force-systemd-flag-20220728205429-9843 | jenkins | v1.26.0 | 28 Jul 22 20:54 UTC | 28 Jul 22 20:55 UTC |
| | force-systemd-flag-20220728205429-9843 | | | | | |
| | --memory=2048 --force-systemd | | | | | |
| | --alsologtostderr -v=5 --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| start | -p | running-upgrade-20220728205348-9843 | jenkins | v1.26.0 | 28 Jul 22 20:54 UTC | 28 Jul 22 20:55 UTC |
| | running-upgrade-20220728205348-9843 | | | | | |
| | --memory=2200 --alsologtostderr | | | | | |
| | -v=1 --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| start | -p pause-20220728205408-9843 | pause-20220728205408-9843 | jenkins | v1.26.0 | 28 Jul 22 20:55 UTC | 28 Jul 22 20:55 UTC |
| | --alsologtostderr | | | | | |
| | -v=1 --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| ssh | force-systemd-flag-20220728205429-9843 | force-systemd-flag-20220728205429-9843 | jenkins | v1.26.0 | 28 Jul 22 20:55 UTC | 28 Jul 22 20:55 UTC |
| | ssh docker info --format | | | | | |
| | {{.CgroupDriver}} | | | | | |
| delete | -p | force-systemd-flag-20220728205429-9843 | jenkins | v1.26.0 | 28 Jul 22 20:55 UTC | 28 Jul 22 20:55 UTC |
| | force-systemd-flag-20220728205429-9843 | | | | | |
| stop | -p | kubernetes-upgrade-20220728205415-9843 | jenkins | v1.26.0 | 28 Jul 22 20:55 UTC | 28 Jul 22 20:55 UTC |
| | kubernetes-upgrade-20220728205415-9843 | | | | | |
| start | -p | force-systemd-env-20220728205516-9843 | jenkins | v1.26.0 | 28 Jul 22 20:55 UTC | 28 Jul 22 20:55 UTC |
| | force-systemd-env-20220728205516-9843 | | | | | |
| | --memory=2048 --alsologtostderr | | | | | |
| | -v=5 --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| start | -p | kubernetes-upgrade-20220728205415-9843 | jenkins | v1.26.0 | 28 Jul 22 20:55 UTC | |
| | kubernetes-upgrade-20220728205415-9843 | | | | | |
| | --memory=2200 | | | | | |
| | --kubernetes-version=v1.24.3 | | | | | |
| | --alsologtostderr -v=1 --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| delete | -p | running-upgrade-20220728205348-9843 | jenkins | v1.26.0 | 28 Jul 22 20:55 UTC | 28 Jul 22 20:55 UTC |
| | running-upgrade-20220728205348-9843 | | | | | |
| delete | -p flannel-20220728205532-9843 | flannel-20220728205532-9843 | jenkins | v1.26.0 | 28 Jul 22 20:55 UTC | 28 Jul 22 20:55 UTC |
| delete | -p | custom-flannel-20220728205532-9843 | jenkins | v1.26.0 | 28 Jul 22 20:55 UTC | 28 Jul 22 20:55 UTC |
| | custom-flannel-20220728205532-9843 | | | | | |
| start | -p | cert-expiration-20220728205533-9843 | jenkins | v1.26.0 | 28 Jul 22 20:55 UTC | |
| | cert-expiration-20220728205533-9843 | | | | | |
| | --memory=2048 --cert-expiration=3m | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| ssh | force-systemd-env-20220728205516-9843 | force-systemd-env-20220728205516-9843 | jenkins | v1.26.0 | 28 Jul 22 20:55 UTC | 28 Jul 22 20:55 UTC |
| | ssh docker info --format | | | | | |
| | {{.CgroupDriver}} | | | | | |
| delete | -p | force-systemd-env-20220728205516-9843 | jenkins | v1.26.0 | 28 Jul 22 20:55 UTC | 28 Jul 22 20:55 UTC |
| | force-systemd-env-20220728205516-9843 | | | | | |
| start | -p | docker-flags-20220728205555-9843 | jenkins | v1.26.0 | 28 Jul 22 20:55 UTC | |
| | docker-flags-20220728205555-9843 | | | | | |
| | --cache-images=false | | | | | |
| | --memory=2048 | | | | | |
| | --install-addons=false | | | | | |
| | --wait=false | | | | | |
| | --docker-env=FOO=BAR | | | | | |
| | --docker-env=BAZ=BAT | | | | | |
| | --docker-opt=debug | | | | | |
| | --docker-opt=icc=true | | | | | |
| | --alsologtostderr | | | | | |
| | -v=5 --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
|---------|----------------------------------------|----------------------------------------|---------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2022/07/28 20:55:55
Running on machine: ubuntu-20-agent-10
Binary: Built with gc go1.18.3 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0728 20:55:55.784576 246102 out.go:296] Setting OutFile to fd 1 ...
I0728 20:55:55.784752 246102 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0728 20:55:55.784763 246102 out.go:309] Setting ErrFile to fd 2...
I0728 20:55:55.784768 246102 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0728 20:55:55.784883 246102 root.go:332] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/bin
I0728 20:55:55.785463 246102 out.go:303] Setting JSON to false
I0728 20:55:55.787521 246102 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2307,"bootTime":1659039449,"procs":1333,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1013-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0728 20:55:55.787589 246102 start.go:125] virtualization: kvm guest
I0728 20:55:55.790143 246102 out.go:177] * [docker-flags-20220728205555-9843] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
I0728 20:55:55.791677 246102 notify.go:193] Checking for updates...
I0728 20:55:55.791678 246102 out.go:177] - MINIKUBE_LOCATION=14555
I0728 20:55:55.793108 246102 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0728 20:55:55.794463 246102 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
I0728 20:55:55.795950 246102 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube
I0728 20:55:55.797410 246102 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0728 20:55:55.799027 246102 config.go:178] Loaded profile config "cert-expiration-20220728205533-9843": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
I0728 20:55:55.799123 246102 config.go:178] Loaded profile config "kubernetes-upgrade-20220728205415-9843": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
I0728 20:55:55.799203 246102 config.go:178] Loaded profile config "pause-20220728205408-9843": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
I0728 20:55:55.799235 246102 driver.go:365] Setting default libvirt URI to qemu:///system
I0728 20:55:55.851158 246102 docker.go:137] docker version: linux-20.10.17
I0728 20:55:55.851253 246102 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0728 20:55:55.989992 246102 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-07-28 20:55:55.892426044 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1013-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662447616 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
I0728 20:55:55.990103 246102 docker.go:254] overlay module found
I0728 20:55:55.992446 246102 out.go:177] * Using the docker driver based on user configuration
I0728 20:55:55.993845 246102 start.go:284] selected driver: docker
I0728 20:55:55.993855 246102 start.go:808] validating driver "docker" against <nil>
I0728 20:55:55.993873 246102 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0728 20:55:55.994701 246102 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0728 20:55:56.106221 246102 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-07-28 20:55:56.025931675 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1013-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662447616 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
I0728 20:55:56.106345 246102 start_flags.go:296] no existing cluster config was found, will generate one from the flags
I0728 20:55:56.106503 246102 start_flags.go:848] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
I0728 20:55:56.108587 246102 out.go:177] * Using Docker driver with root privileges
I0728 20:55:56.109915 246102 cni.go:95] Creating CNI manager for ""
I0728 20:55:56.109941 246102 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0728 20:55:56.109956 246102 start_flags.go:310] config:
{Name:docker-flags-20220728205555-9843 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:docker-flags-20220728205555-9843 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
I0728 20:55:56.112303 246102 out.go:177] * Starting control plane node docker-flags-20220728205555-9843 in cluster docker-flags-20220728205555-9843
I0728 20:55:56.113609 246102 cache.go:120] Beginning downloading kic base image for docker with docker
I0728 20:55:56.114828 246102 out.go:177] * Pulling base image ...
I0728 20:55:56.116069 246102 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
I0728 20:55:56.116112 246102 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4
I0728 20:55:56.116116 246102 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon
I0728 20:55:56.116129 246102 cache.go:57] Caching tarball of preloaded images
I0728 20:55:56.116358 246102 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0728 20:55:56.116376 246102 cache.go:60] Finished verifying existence of preloaded tar for v1.24.3 on docker
I0728 20:55:56.116495 246102 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/docker-flags-20220728205555-9843/config.json ...
I0728 20:55:56.116523 246102 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/docker-flags-20220728205555-9843/config.json: {Name:mk87b539a2557392f06fde77c4681d4160025661 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0728 20:55:56.151936 246102 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon, skipping pull
I0728 20:55:56.151966 246102 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 exists in daemon, skipping load
I0728 20:55:56.151983 246102 cache.go:208] Successfully downloaded all kic artifacts
I0728 20:55:56.152026 246102 start.go:370] acquiring machines lock for docker-flags-20220728205555-9843: {Name:mk84733713e520b0a2055bad17ffd450c466b09d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0728 20:55:56.152167 246102 start.go:374] acquired machines lock for "docker-flags-20220728205555-9843" in 119.024µs
I0728 20:55:56.152200 246102 start.go:92] Provisioning new machine with config: &{Name:docker-flags-20220728205555-9843 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:docker-flags-20220728205555-9843 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:} &{Name: IP: Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}
I0728 20:55:56.152305 246102 start.go:132] createHost starting for "" (driver="docker")
I0728 20:55:54.847824 228788 addons.go:414] enableAddons completed in 863.208744ms
I0728 20:55:55.155995 228788 pod_ready.go:92] pod "etcd-pause-20220728205408-9843" in "kube-system" namespace has status "Ready":"True"
I0728 20:55:55.156017 228788 pod_ready.go:81] duration metric: took 400.176809ms waiting for pod "etcd-pause-20220728205408-9843" in "kube-system" namespace to be "Ready" ...
I0728 20:55:55.156026 228788 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-20220728205408-9843" in "kube-system" namespace to be "Ready" ...
I0728 20:55:55.556032 228788 pod_ready.go:92] pod "kube-apiserver-pause-20220728205408-9843" in "kube-system" namespace has status "Ready":"True"
I0728 20:55:55.556067 228788 pod_ready.go:81] duration metric: took 400.032237ms waiting for pod "kube-apiserver-pause-20220728205408-9843" in "kube-system" namespace to be "Ready" ...
I0728 20:55:55.556082 228788 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-20220728205408-9843" in "kube-system" namespace to be "Ready" ...
I0728 20:55:55.956043 228788 pod_ready.go:92] pod "kube-controller-manager-pause-20220728205408-9843" in "kube-system" namespace has status "Ready":"True"
I0728 20:55:55.956069 228788 pod_ready.go:81] duration metric: took 399.978224ms waiting for pod "kube-controller-manager-pause-20220728205408-9843" in "kube-system" namespace to be "Ready" ...
I0728 20:55:55.956081 228788 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bgdg9" in "kube-system" namespace to be "Ready" ...
I0728 20:55:56.355801 228788 pod_ready.go:92] pod "kube-proxy-bgdg9" in "kube-system" namespace has status "Ready":"True"
I0728 20:55:56.355824 228788 pod_ready.go:81] duration metric: took 399.733762ms waiting for pod "kube-proxy-bgdg9" in "kube-system" namespace to be "Ready" ...
I0728 20:55:56.355836 228788 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-20220728205408-9843" in "kube-system" namespace to be "Ready" ...
I0728 20:55:56.755604 228788 pod_ready.go:92] pod "kube-scheduler-pause-20220728205408-9843" in "kube-system" namespace has status "Ready":"True"
I0728 20:55:56.755626 228788 pod_ready.go:81] duration metric: took 399.78195ms waiting for pod "kube-scheduler-pause-20220728205408-9843" in "kube-system" namespace to be "Ready" ...
I0728 20:55:56.755635 228788 pod_ready.go:38] duration metric: took 2.598319546s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0728 20:55:56.755657 228788 api_server.go:51] waiting for apiserver process to appear ...
I0728 20:55:56.755703 228788 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:55:56.765960 228788 api_server.go:71] duration metric: took 2.78138638s to wait for apiserver process to appear ...
I0728 20:55:56.765985 228788 api_server.go:87] waiting for apiserver healthz status ...
I0728 20:55:56.765999 228788 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I0728 20:55:56.771391 228788 api_server.go:266] https://192.168.67.2:8443/healthz returned 200:
ok
I0728 20:55:56.772200 228788 api_server.go:140] control plane version: v1.24.3
I0728 20:55:56.772218 228788 api_server.go:130] duration metric: took 6.226542ms to wait for apiserver health ...
I0728 20:55:56.772230 228788 system_pods.go:43] waiting for kube-system pods to appear ...
I0728 20:55:56.958841 228788 system_pods.go:59] 7 kube-system pods found
I0728 20:55:56.958878 228788 system_pods.go:61] "coredns-6d4b75cb6d-8z2tg" [7d00c451-82b6-48d5-beaa-bc5fa6f1a242] Running
I0728 20:55:56.958885 228788 system_pods.go:61] "etcd-pause-20220728205408-9843" [6559fba2-a38c-4c54-8cb2-2a370253e16a] Running
I0728 20:55:56.958893 228788 system_pods.go:61] "kube-apiserver-pause-20220728205408-9843" [3b2dac3b-2cc5-4181-9ec8-1b75aa19132e] Running
I0728 20:55:56.958900 228788 system_pods.go:61] "kube-controller-manager-pause-20220728205408-9843" [2d218ea7-e53d-42a8-a37c-84c4f5057aae] Running
I0728 20:55:56.958906 228788 system_pods.go:61] "kube-proxy-bgdg9" [4011009e-4b1d-4e94-9355-f2de01699705] Running
I0728 20:55:56.958913 228788 system_pods.go:61] "kube-scheduler-pause-20220728205408-9843" [5cd50c77-b5ad-4ba4-83f0-c5c191231738] Running
I0728 20:55:56.958920 228788 system_pods.go:61] "storage-provisioner" [bbaefc83-1444-4c06-837a-3e437bbb77f2] Running
I0728 20:55:56.958926 228788 system_pods.go:74] duration metric: took 186.690368ms to wait for pod list to return data ...
I0728 20:55:56.958936 228788 default_sa.go:34] waiting for default service account to be created ...
I0728 20:55:57.157423 228788 default_sa.go:45] found service account: "default"
I0728 20:55:57.157453 228788 default_sa.go:55] duration metric: took 198.506077ms for default service account to be created ...
I0728 20:55:57.157463 228788 system_pods.go:116] waiting for k8s-apps to be running ...
I0728 20:55:57.359189 228788 system_pods.go:86] 7 kube-system pods found
I0728 20:55:57.359220 228788 system_pods.go:89] "coredns-6d4b75cb6d-8z2tg" [7d00c451-82b6-48d5-beaa-bc5fa6f1a242] Running
I0728 20:55:57.359228 228788 system_pods.go:89] "etcd-pause-20220728205408-9843" [6559fba2-a38c-4c54-8cb2-2a370253e16a] Running
I0728 20:55:57.359235 228788 system_pods.go:89] "kube-apiserver-pause-20220728205408-9843" [3b2dac3b-2cc5-4181-9ec8-1b75aa19132e] Running
I0728 20:55:57.359241 228788 system_pods.go:89] "kube-controller-manager-pause-20220728205408-9843" [2d218ea7-e53d-42a8-a37c-84c4f5057aae] Running
I0728 20:55:57.359248 228788 system_pods.go:89] "kube-proxy-bgdg9" [4011009e-4b1d-4e94-9355-f2de01699705] Running
I0728 20:55:57.359254 228788 system_pods.go:89] "kube-scheduler-pause-20220728205408-9843" [5cd50c77-b5ad-4ba4-83f0-c5c191231738] Running
I0728 20:55:57.359260 228788 system_pods.go:89] "storage-provisioner" [bbaefc83-1444-4c06-837a-3e437bbb77f2] Running
I0728 20:55:57.359269 228788 system_pods.go:126] duration metric: took 201.799524ms to wait for k8s-apps to be running ...
I0728 20:55:57.359278 228788 system_svc.go:44] waiting for kubelet service to be running ....
I0728 20:55:57.359322 228788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0728 20:55:57.371297 228788 system_svc.go:56] duration metric: took 12.014273ms WaitForService to wait for kubelet.
I0728 20:55:57.371319 228788 kubeadm.go:572] duration metric: took 3.386749704s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0728 20:55:57.371336 228788 node_conditions.go:102] verifying NodePressure condition ...
I0728 20:55:57.555946 228788 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
I0728 20:55:57.555973 228788 node_conditions.go:123] node cpu capacity is 8
I0728 20:55:57.555986 228788 node_conditions.go:105] duration metric: took 184.644433ms to run NodePressure ...
I0728 20:55:57.556000 228788 start.go:216] waiting for startup goroutines ...
I0728 20:55:57.600867 228788 start.go:506] kubectl: 1.24.3, cluster: 1.24.3 (minor skew: 0)
I0728 20:55:57.603232 228788 out.go:177] * Done! kubectl is now configured to use "pause-20220728205408-9843" cluster and "default" namespace by default
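Note: as the profile.go:148 line in the trace above shows, minikube persists each profile's cluster config as JSON under MINIKUBE_HOME at profiles/<name>/config.json, and the start path reloads it on subsequent runs. A sketch for inspecting a field of that file, assuming jq is available on the host and reusing the exact path logged for the docker-flags profile in this run:

$ jq '.KubernetesConfig.KubernetesVersion' /home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/docker-flags-20220728205555-9843/config.json

For this run that should print "v1.24.3", matching the config struct dumped above.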
*
* ==> Docker <==
* -- Logs begin at Thu 2022-07-28 20:54:19 UTC, end at Thu 2022-07-28 20:55:58 UTC. --
Jul 28 20:55:24 pause-20220728205408-9843 dockerd[4023]: time="2022-07-28T20:55:24.987280313Z" level=info msg="ignoring event" container=8d0ece39f0280b35c612035174890dedb1d41ec4d62e59f3e35894e0f08a3196 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jul 28 20:55:25 pause-20220728205408-9843 dockerd[4023]: time="2022-07-28T20:55:25.245735539Z" level=info msg="Removing stale sandbox 82c4bbb47bdc3bae4db17ca41592fde40f3cca1e775b57c4ed17a686dcf9589d (ff4527bd1426356b58b3574ac17d9a94ac057c3ade1f313a5eb17d9af025aa16)"
Jul 28 20:55:25 pause-20220728205408-9843 dockerd[4023]: time="2022-07-28T20:55:25.247887302Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint bdc1ef4d0ee5b9e825bec1b826d95e1d783940decf573efdb1ceb0da298fc65e 414cb073a1eabea1456982110ac2718a77057b59f37b3d5ad4c6f7723f578c71], retrying...."
Jul 28 20:55:25 pause-20220728205408-9843 dockerd[4023]: time="2022-07-28T20:55:25.363249845Z" level=info msg="Removing stale sandbox 922c7a4824c1fb0750ab9ef663519ad0b69c8b09504d61d741dcde20fb3661f0 (a9e0b3140400d291ad3440f978f7affb9892feaa707c01c5920d1e915b15ce63)"
Jul 28 20:55:25 pause-20220728205408-9843 dockerd[4023]: time="2022-07-28T20:55:25.365048372Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint bdc1ef4d0ee5b9e825bec1b826d95e1d783940decf573efdb1ceb0da298fc65e b1615e1e43bac09b4a833ebe3f0da34b70b8dbd9a67978d9ec84ed6ce5ae5626], retrying...."
Jul 28 20:55:25 pause-20220728205408-9843 dockerd[4023]: time="2022-07-28T20:55:25.473795736Z" level=info msg="Removing stale sandbox 9c81822bdf2d8243556e813d72daccedf551d43bfa656083923796d756faa545 (8825b8a486cbdd91bd5c0a58a15dd7ddbee55de4faf0ddae1e627a557ebe415d)"
Jul 28 20:55:25 pause-20220728205408-9843 dockerd[4023]: time="2022-07-28T20:55:25.475354252Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint bdc1ef4d0ee5b9e825bec1b826d95e1d783940decf573efdb1ceb0da298fc65e e60c43f93cfb1fdaf2d0b5eab452b6d23828a3eddec60694e842d44d72f72c4f], retrying...."
Jul 28 20:55:25 pause-20220728205408-9843 dockerd[4023]: time="2022-07-28T20:55:25.575962139Z" level=info msg="Removing stale sandbox ba953d6dec138b6c7bd2a4eddd60993cd3199d4ee81a00b53a62d563b7ed8bf5 (eb1a00c43d2d85cd2faa65ddfce7aa6a2651bab15974901f423501913e99aee4)"
Jul 28 20:55:25 pause-20220728205408-9843 dockerd[4023]: time="2022-07-28T20:55:25.577832723Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint bdc1ef4d0ee5b9e825bec1b826d95e1d783940decf573efdb1ceb0da298fc65e e6997d36a215d79831d1e9d8756900a73bc72746257d52ab5ea6b68e5e91e87d], retrying...."
Jul 28 20:55:25 pause-20220728205408-9843 dockerd[4023]: time="2022-07-28T20:55:25.608682585Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Jul 28 20:55:25 pause-20220728205408-9843 dockerd[4023]: time="2022-07-28T20:55:25.657651786Z" level=info msg="Loading containers: done."
Jul 28 20:55:25 pause-20220728205408-9843 dockerd[4023]: time="2022-07-28T20:55:25.671464864Z" level=info msg="Docker daemon" commit=a89b842 graphdriver(s)=overlay2 version=20.10.17
Jul 28 20:55:25 pause-20220728205408-9843 dockerd[4023]: time="2022-07-28T20:55:25.671582622Z" level=info msg="Daemon has completed initialization"
Jul 28 20:55:25 pause-20220728205408-9843 systemd[1]: Started Docker Application Container Engine.
Jul 28 20:55:25 pause-20220728205408-9843 dockerd[4023]: time="2022-07-28T20:55:25.693559800Z" level=info msg="API listen on [::]:2376"
Jul 28 20:55:25 pause-20220728205408-9843 dockerd[4023]: time="2022-07-28T20:55:25.698760616Z" level=info msg="API listen on /var/run/docker.sock"
Jul 28 20:55:25 pause-20220728205408-9843 dockerd[4023]: time="2022-07-28T20:55:25.923009480Z" level=error msg="Failed to compute size of container rootfs 0ebd4e5d69f1cb55d6e2fdd5ee244e6b30cb92e87ae999c0d5937940d11f0a39: mount does not exist"
Jul 28 20:55:30 pause-20220728205408-9843 dockerd[4023]: time="2022-07-28T20:55:30.976038533Z" level=info msg="ignoring event" container=d508b7234c30c0bb52c17971db4483cd3d0a2a47784c4e3e0b199b37fa757147 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jul 28 20:55:30 pause-20220728205408-9843 dockerd[4023]: time="2022-07-28T20:55:30.977080845Z" level=info msg="ignoring event" container=6332eed4991deea6a07be9ab2e12929fb070b22d064bbdc31ba9495dc7b25dea module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jul 28 20:55:30 pause-20220728205408-9843 dockerd[4023]: time="2022-07-28T20:55:30.979186893Z" level=info msg="ignoring event" container=2ecf26cec7af637681ac3c70fe43c41f179d1722bc77c7a2d2d489cfb915716f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jul 28 20:55:30 pause-20220728205408-9843 dockerd[4023]: time="2022-07-28T20:55:30.983819134Z" level=info msg="ignoring event" container=bfbc7b0761b06fc530b6a4879abab0f99c8c2e46ed493db2576c042bb649826c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jul 28 20:55:31 pause-20220728205408-9843 dockerd[4023]: time="2022-07-28T20:55:31.780700668Z" level=error msg="69e901f3a577186b2bd9aca5dbeff0ed7fd3b53d068564d1569703a08ed61d76 cleanup: failed to delete container from containerd: no such container"
Jul 28 20:55:31 pause-20220728205408-9843 dockerd[4023]: time="2022-07-28T20:55:31.781183574Z" level=error msg="950617f0fbf7b728fa7f89c1592f230f48b53bfd780830b663cdd9f98c8c7250 cleanup: failed to delete container from containerd: no such container"
Jul 28 20:55:31 pause-20220728205408-9843 dockerd[4023]: time="2022-07-28T20:55:31.784745423Z" level=error msg="935c125c4397284b77ce4b019ea03e774b5cd922e9950a2fd0caa9c0bcace5e4 cleanup: failed to delete container from containerd: no such container"
Jul 28 20:55:32 pause-20220728205408-9843 dockerd[4023]: time="2022-07-28T20:55:32.092495460Z" level=error msg="f8de79ccdd25ee97655a8d1988aef6feeb0fc58ab452d76b3317914482a392ac cleanup: failed to delete container from containerd: no such container"
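Note: the entries above are the dockerd journald unit log from inside the node container, covering the daemon restart during the second start (stale sandbox removal, then "Daemon has completed initialization"). A sketch for tailing the same log directly, reusing the ssh invocation style already used by the tests above:

$ out/minikube-linux-amd64 ssh -p pause-20220728205408-9843 "sudo journalctl -u docker --no-pager | tail -n 25"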
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
54535d0f06bff 6e38f40d628db 3 seconds ago Running storage-provisioner 0 1c0b89f9f48bf
9a5ac22b3e259 a4ca41631cc7a 17 seconds ago Running coredns 2 43eeb5ca36c34
aac25227dad35 2ae1ba6417cbc 18 seconds ago Running kube-proxy 3 9ec27cff68a95
3e228f51ae388 3a5aa3a515f5d 24 seconds ago Running kube-scheduler 3 2bbcd5540726b
dcf77ae08999c aebe758cef4cd 24 seconds ago Running etcd 2 ed10281db78a9
4d6bd991be35a 586c112956dfc 24 seconds ago Running kube-controller-manager 3 d8583b123eb3c
1ade8458c928a d521dd763e2e3 24 seconds ago Running kube-apiserver 3 88268d22e7aeb
68f145b308b26 586c112956dfc 26 seconds ago Created kube-controller-manager 2 d8583b123eb3c
5046c6347079c d521dd763e2e3 26 seconds ago Created kube-apiserver 2 88268d22e7aeb
69e901f3a5771 2ae1ba6417cbc 31 seconds ago Created kube-proxy 2 2ecf26cec7af6
950617f0fbf7b aebe758cef4cd 32 seconds ago Created etcd 1 bfbc7b0761b06
f8de79ccdd25e a4ca41631cc7a 32 seconds ago Created coredns 1 d508b7234c30c
935c125c43972 3a5aa3a515f5d 32 seconds ago Created kube-scheduler 2 6332eed4991de
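Note: the four containers stuck in Created state here (69e901f3a5771, 950617f0fbf7b, f8de79ccdd25e, 935c125c43972) are the same IDs named in the "cleanup: failed to delete container from containerd" errors in the Docker section above; each was superseded by the Running container with the next ATTEMPT number. The column layout matches crictl's "ps -a" output, so the view can be reproduced on the node; a sketch:

$ out/minikube-linux-amd64 ssh -p pause-20220728205408-9843 "sudo crictl ps -a"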
*
* ==> coredns [9a5ac22b3e25] <==
* .:53
[INFO] plugin/reload: Running configuration MD5 = 821b10ea3c4cce3a8581cf6a394d92f0
CoreDNS-1.8.6
linux/amd64, go1.17.1, 13a9191
*
* ==> coredns [f8de79ccdd25] <==
*
*
* ==> describe nodes <==
* Name: pause-20220728205408-9843
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=pause-20220728205408-9843
kubernetes.io/os=linux
minikube.k8s.io/commit=363f4186470802814a32480695fe2a353fd5f551
minikube.k8s.io/name=pause-20220728205408-9843
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2022_07_28T20_54_48_0700
minikube.k8s.io/version=v1.26.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Thu, 28 Jul 2022 20:54:46 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: pause-20220728205408-9843
AcquireTime: <unset>
RenewTime: Thu, 28 Jul 2022 20:55:59 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Thu, 28 Jul 2022 20:55:38 +0000 Thu, 28 Jul 2022 20:54:45 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 28 Jul 2022 20:55:38 +0000 Thu, 28 Jul 2022 20:54:45 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Thu, 28 Jul 2022 20:55:38 +0000 Thu, 28 Jul 2022 20:54:45 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Thu, 28 Jul 2022 20:55:38 +0000 Thu, 28 Jul 2022 20:54:59 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.67.2
Hostname: pause-20220728205408-9843
Capacity:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32873484Ki
pods: 110
Allocatable:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32873484Ki
pods: 110
System Info:
Machine ID: 855c6c72c86b4657b3d8c3c774fd7e1d
System UUID: c75052a2-fe0e-4168-898e-9d8afef41346
Boot ID: 8561e8cd-909f-419e-9bf9-95dd86404884
Kernel Version: 5.15.0-1013-gcp
OS Image: Ubuntu 20.04.4 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.17
Kubelet Version: v1.24.3
Kube-Proxy Version: v1.24.3
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (7 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system coredns-6d4b75cb6d-8z2tg 100m (1%) 0 (0%) 70Mi (0%) 170Mi (0%) 57s
kube-system etcd-pause-20220728205408-9843 100m (1%) 0 (0%) 100Mi (0%) 0 (0%) 70s
kube-system kube-apiserver-pause-20220728205408-9843 250m (3%) 0 (0%) 0 (0%) 0 (0%) 70s
kube-system kube-controller-manager-pause-20220728205408-9843 200m (2%) 0 (0%) 0 (0%) 0 (0%) 70s
kube-system kube-proxy-bgdg9 0 (0%) 0 (0%) 0 (0%) 0 (0%) 58s
kube-system kube-scheduler-pause-20220728205408-9843 100m (1%) 0 (0%) 0 (0%) 0 (0%) 70s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 750m (9%) 0 (0%)
memory 170Mi (0%) 170Mi (0%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 56s kube-proxy
Normal Starting 17s kube-proxy
Normal NodeHasSufficientMemory 87s (x4 over 87s) kubelet Node pause-20220728205408-9843 status is now: NodeHasSufficientMemory
Normal NodeHasSufficientPID 87s (x4 over 87s) kubelet Node pause-20220728205408-9843 status is now: NodeHasSufficientPID
Normal NodeHasNoDiskPressure 87s (x4 over 87s) kubelet Node pause-20220728205408-9843 status is now: NodeHasNoDiskPressure
Normal Starting 71s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 70s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 70s kubelet Node pause-20220728205408-9843 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 70s kubelet Node pause-20220728205408-9843 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 70s kubelet Node pause-20220728205408-9843 status is now: NodeHasSufficientPID
Normal NodeNotReady 70s kubelet Node pause-20220728205408-9843 status is now: NodeNotReady
Normal NodeReady 60s kubelet Node pause-20220728205408-9843 status is now: NodeReady
Normal RegisteredNode 58s node-controller Node pause-20220728205408-9843 event: Registered Node pause-20220728205408-9843 in Controller
Normal Starting 26s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 26s (x8 over 26s) kubelet Node pause-20220728205408-9843 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 26s (x8 over 26s) kubelet Node pause-20220728205408-9843 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 26s (x7 over 26s) kubelet Node pause-20220728205408-9843 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 26s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 7s node-controller Node pause-20220728205408-9843 event: Registered Node pause-20220728205408-9843 in Controller
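Note: this section is the "describe node" view of the profile's single node; the second wave of NodeHasSufficientMemory/NodeHasNoDiskPressure/NodeHasSufficientPID and RegisteredNode events at 26s/7s reflects the kubelet restart during the second start rather than a new problem. A sketch for regenerating the view through minikube's bundled kubectl:

$ out/minikube-linux-amd64 -p pause-20220728205408-9843 kubectl -- describe node pause-20220728205408-9843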
*
* ==> dmesg <==
* [ +0.007956] FS-Cache: N-cookie d=000000002c46670e{9p.inode} n=00000000965016fe
[ +0.008721] FS-Cache: N-key=[8] '8ba00f0200000000'
[ +0.008448] FS-Cache: Duplicate cookie detected
[ +0.005260] FS-Cache: O-cookie c=00000004 [p=00000003 fl=226 nc=0 na=1]
[ +0.008126] FS-Cache: O-cookie d=000000002c46670e{9p.inode} n=00000000b9a042c5
[ +0.008716] FS-Cache: O-key=[8] '8ba00f0200000000'
[ +0.006289] FS-Cache: N-cookie c=0000000d [p=00000003 fl=2 nc=0 na=1]
[ +0.007972] FS-Cache: N-cookie d=000000002c46670e{9p.inode} n=00000000c2425d5e
[ +0.008703] FS-Cache: N-key=[8] '8ba00f0200000000'
[ +3.219644] FS-Cache: Duplicate cookie detected
[ +0.004687] FS-Cache: O-cookie c=00000006 [p=00000003 fl=226 nc=0 na=1]
[ +0.006778] FS-Cache: O-cookie d=000000002c46670e{9p.inode} n=00000000a3ee3274
[ +0.007384] FS-Cache: O-key=[8] '8aa00f0200000000'
[ +0.004955] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
[ +0.007943] FS-Cache: N-cookie d=000000002c46670e{9p.inode} n=00000000aaf0937f
[ +0.008741] FS-Cache: N-key=[8] '8aa00f0200000000'
[ +0.506795] FS-Cache: Duplicate cookie detected
[ +0.004686] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
[ +0.006760] FS-Cache: O-cookie d=000000002c46670e{9p.inode} n=00000000ce7a7e79
[ +0.007354] FS-Cache: O-key=[8] '91a00f0200000000'
[ +0.004925] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
[ +0.007944] FS-Cache: N-cookie d=000000002c46670e{9p.inode} n=000000001cb78e37
[ +0.008718] FS-Cache: N-key=[8] '91a00f0200000000'
[Jul28 20:34] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
[Jul28 20:53] process 'docker/tmp/qemu-check079626803/check' started with executable stack
*
* ==> etcd [950617f0fbf7] <==
*
*
* ==> etcd [dcf77ae08999] <==
* {"level":"warn","ts":"2022-07-28T20:55:39.846Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"107.036497ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"warn","ts":"2022-07-28T20:55:39.846Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"116.099382ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:aggregate-to-edit\" ","response":"range_response_count:1 size:1929"}
{"level":"info","ts":"2022-07-28T20:55:39.846Z","caller":"traceutil/trace.go:171","msg":"trace[1277082378] range","detail":"{range_begin:/registry/clusterroles/system:aggregate-to-edit; range_end:; response_count:1; response_revision:394; }","duration":"116.123814ms","start":"2022-07-28T20:55:39.729Z","end":"2022-07-28T20:55:39.846Z","steps":["trace[1277082378] 'agreement among raft nodes before linearized reading' (duration: 116.079309ms)"],"step_count":1}
{"level":"info","ts":"2022-07-28T20:55:39.846Z","caller":"traceutil/trace.go:171","msg":"trace[1458254766] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:394; }","duration":"107.091801ms","start":"2022-07-28T20:55:39.738Z","end":"2022-07-28T20:55:39.846Z","steps":["trace[1458254766] 'agreement among raft nodes before linearized reading' (duration: 106.933879ms)"],"step_count":1}
{"level":"warn","ts":"2022-07-28T20:55:39.846Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"118.032298ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-bgdg9\" ","response":"range_response_count:1 size:4437"}
{"level":"info","ts":"2022-07-28T20:55:39.846Z","caller":"traceutil/trace.go:171","msg":"trace[914151284] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-bgdg9; range_end:; response_count:1; response_revision:394; }","duration":"118.211476ms","start":"2022-07-28T20:55:39.727Z","end":"2022-07-28T20:55:39.846Z","steps":["trace[914151284] 'agreement among raft nodes before linearized reading' (duration: 117.890885ms)"],"step_count":1}
{"level":"warn","ts":"2022-07-28T20:55:40.228Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"343.523664ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/edit\" ","response":"range_response_count:1 size:3379"}
{"level":"info","ts":"2022-07-28T20:55:40.228Z","caller":"traceutil/trace.go:171","msg":"trace[1208480452] range","detail":"{range_begin:/registry/clusterroles/edit; range_end:; response_count:1; response_revision:399; }","duration":"343.641439ms","start":"2022-07-28T20:55:39.885Z","end":"2022-07-28T20:55:40.228Z","steps":["trace[1208480452] 'agreement among raft nodes before linearized reading' (duration: 88.612437ms)","trace[1208480452] 'range keys from in-memory index tree' (duration: 254.857308ms)"],"step_count":2}
{"level":"warn","ts":"2022-07-28T20:55:40.228Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-07-28T20:55:39.885Z","time spent":"343.69039ms","remote":"127.0.0.1:36968","response type":"/etcdserverpb.KV/Range","request count":0,"request size":29,"response count":1,"response size":3403,"request content":"key:\"/registry/clusterroles/edit\" "}
{"level":"warn","ts":"2022-07-28T20:55:40.229Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"254.954912ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2289942175248618431 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/pause-20220728205408-9843.170619aab69936e1\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/pause-20220728205408-9843.170619aab69936e1\" value_size:568 lease:2289942175248618400 >> failure:<>>","response":"size:16"}
{"level":"info","ts":"2022-07-28T20:55:40.229Z","caller":"traceutil/trace.go:171","msg":"trace[1197468206] transaction","detail":"{read_only:false; response_revision:400; number_of_response:1; }","duration":"341.914118ms","start":"2022-07-28T20:55:39.887Z","end":"2022-07-28T20:55:40.229Z","steps":["trace[1197468206] 'process raft request' (duration: 86.499197ms)","trace[1197468206] 'compare' (duration: 254.731034ms)"],"step_count":2}
{"level":"info","ts":"2022-07-28T20:55:40.229Z","caller":"traceutil/trace.go:171","msg":"trace[1936646607] transaction","detail":"{read_only:false; response_revision:401; number_of_response:1; }","duration":"199.446321ms","start":"2022-07-28T20:55:40.029Z","end":"2022-07-28T20:55:40.229Z","steps":["trace[1936646607] 'process raft request' (duration: 199.376927ms)"],"step_count":1}
{"level":"warn","ts":"2022-07-28T20:55:40.229Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-07-28T20:55:39.887Z","time spent":"341.98956ms","remote":"127.0.0.1:36884","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":653,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/default/pause-20220728205408-9843.170619aab69936e1\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/pause-20220728205408-9843.170619aab69936e1\" value_size:568 lease:2289942175248618400 >> failure:<>"}
{"level":"info","ts":"2022-07-28T20:55:40.229Z","caller":"traceutil/trace.go:171","msg":"trace[656430561] linearizableReadLoop","detail":"{readStateIndex:420; appliedIndex:419; }","duration":"255.55912ms","start":"2022-07-28T20:55:39.973Z","end":"2022-07-28T20:55:40.229Z","steps":["trace[656430561] 'read index received' (duration: 30.653813ms)","trace[656430561] 'applied index is now lower than readState.Index' (duration: 224.903883ms)"],"step_count":2}
{"level":"warn","ts":"2022-07-28T20:55:40.229Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"289.890026ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2022-07-28T20:55:40.229Z","caller":"traceutil/trace.go:171","msg":"trace[126516261] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:401; }","duration":"289.923404ms","start":"2022-07-28T20:55:39.939Z","end":"2022-07-28T20:55:40.229Z","steps":["trace[126516261] 'agreement among raft nodes before linearized reading' (duration: 289.872865ms)"],"step_count":1}
{"level":"warn","ts":"2022-07-28T20:55:40.234Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"167.548767ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2022-07-28T20:55:40.234Z","caller":"traceutil/trace.go:171","msg":"trace[1321501223] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:401; }","duration":"167.615609ms","start":"2022-07-28T20:55:40.066Z","end":"2022-07-28T20:55:40.234Z","steps":["trace[1321501223] 'agreement among raft nodes before linearized reading' (duration: 167.512863ms)"],"step_count":1}
{"level":"info","ts":"2022-07-28T20:55:40.533Z","caller":"traceutil/trace.go:171","msg":"trace[1260603021] linearizableReadLoop","detail":"{readStateIndex:424; appliedIndex:424; }","duration":"102.585942ms","start":"2022-07-28T20:55:40.431Z","end":"2022-07-28T20:55:40.533Z","steps":["trace[1260603021] 'read index received' (duration: 102.564734ms)","trace[1260603021] 'applied index is now lower than readState.Index' (duration: 19.453µs)"],"step_count":2}
{"level":"warn","ts":"2022-07-28T20:55:40.613Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"275.136976ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"warn","ts":"2022-07-28T20:55:40.613Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"274.178241ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/default/pause-20220728205408-9843.170619aaafec6f93\" ","response":"range_response_count:1 size:689"}
{"level":"info","ts":"2022-07-28T20:55:40.613Z","caller":"traceutil/trace.go:171","msg":"trace[556432168] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:404; }","duration":"275.266215ms","start":"2022-07-28T20:55:40.338Z","end":"2022-07-28T20:55:40.613Z","steps":["trace[556432168] 'agreement among raft nodes before linearized reading' (duration: 195.563918ms)","trace[556432168] 'range keys from in-memory index tree' (duration: 79.552652ms)"],"step_count":2}
{"level":"info","ts":"2022-07-28T20:55:40.613Z","caller":"traceutil/trace.go:171","msg":"trace[732415796] range","detail":"{range_begin:/registry/events/default/pause-20220728205408-9843.170619aaafec6f93; range_end:; response_count:1; response_revision:404; }","duration":"274.226139ms","start":"2022-07-28T20:55:40.339Z","end":"2022-07-28T20:55:40.613Z","steps":["trace[732415796] 'agreement among raft nodes before linearized reading' (duration: 194.427306ms)","trace[732415796] 'range keys from in-memory index tree' (duration: 79.720159ms)"],"step_count":2}
{"level":"warn","ts":"2022-07-28T20:55:40.613Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"179.02147ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:aggregate-to-view\" ","response":"range_response_count:1 size:1932"}
{"level":"info","ts":"2022-07-28T20:55:40.613Z","caller":"traceutil/trace.go:171","msg":"trace[1033466391] range","detail":"{range_begin:/registry/clusterroles/system:aggregate-to-view; range_end:; response_count:1; response_revision:404; }","duration":"179.303747ms","start":"2022-07-28T20:55:40.434Z","end":"2022-07-28T20:55:40.613Z","steps":["trace[1033466391] 'agreement among raft nodes before linearized reading' (duration: 99.441492ms)","trace[1033466391] 'range keys from in-memory index tree' (duration: 79.531252ms)"],"step_count":2}
*
* ==> kernel <==
* 20:55:59 up 38 min, 0 users, load average: 8.27, 5.46, 2.81
Linux pause-20220728205408-9843 5.15.0-1013-gcp #18~20.04.1-Ubuntu SMP Sun Jul 3 08:20:07 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.4 LTS"
*
* ==> kube-apiserver [1ade8458c928] <==
* I0728 20:55:38.586687 1 naming_controller.go:291] Starting NamingConditionController
I0728 20:55:38.586725 1 establishing_controller.go:76] Starting EstablishingController
I0728 20:55:38.586755 1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I0728 20:55:38.586771 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0728 20:55:38.586808 1 crd_finalizer.go:266] Starting CRDFinalizer
I0728 20:55:38.594251 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I0728 20:55:38.656832 1 shared_informer.go:262] Caches are synced for node_authorizer
I0728 20:55:38.672358 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0728 20:55:38.672407 1 shared_informer.go:262] Caches are synced for crd-autoregister
I0728 20:55:38.672442 1 cache.go:39] Caches are synced for autoregister controller
I0728 20:55:38.675076 1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
I0728 20:55:38.677315 1 apf_controller.go:322] Running API Priority and Fairness config worker
I0728 20:55:38.677334 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
E0728 20:55:38.827208 1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
I0728 20:55:38.829060 1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
I0728 20:55:39.301220 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0728 20:55:39.725106 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0728 20:55:41.269651 1 controller.go:611] quota admission added evaluator for: serviceaccounts
I0728 20:55:41.279281 1 controller.go:611] quota admission added evaluator for: deployments.apps
I0728 20:55:41.323131 1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
I0728 20:55:41.350600 1 controller.go:611] quota admission added evaluator for: daemonsets.apps
I0728 20:55:41.368045 1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0728 20:55:41.374622 1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0728 20:55:52.466665 1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0728 20:55:52.578290 1 controller.go:611] quota admission added evaluator for: endpoints
*
* ==> kube-apiserver [5046c6347079] <==
*
*
* ==> kube-controller-manager [4d6bd991be35] <==
* I0728 20:55:52.401551 1 shared_informer.go:262] Caches are synced for HPA
I0728 20:55:52.402670 1 shared_informer.go:262] Caches are synced for PVC protection
I0728 20:55:52.405849 1 shared_informer.go:262] Caches are synced for taint
I0728 20:55:52.405930 1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone:
W0728 20:55:52.405991 1 node_lifecycle_controller.go:1014] Missing timestamp for Node pause-20220728205408-9843. Assuming now as a timestamp.
I0728 20:55:52.405989 1 taint_manager.go:187] "Starting NoExecuteTaintManager"
I0728 20:55:52.406102 1 event.go:294] "Event occurred" object="pause-20220728205408-9843" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-20220728205408-9843 event: Registered Node pause-20220728205408-9843 in Controller"
I0728 20:55:52.406122 1 node_lifecycle_controller.go:1215] Controller detected that zone is now in state Normal.
I0728 20:55:52.408197 1 shared_informer.go:262] Caches are synced for ephemeral
I0728 20:55:52.409107 1 shared_informer.go:262] Caches are synced for deployment
I0728 20:55:52.421694 1 shared_informer.go:262] Caches are synced for TTL
I0728 20:55:52.421760 1 shared_informer.go:262] Caches are synced for namespace
I0728 20:55:52.421838 1 shared_informer.go:262] Caches are synced for PV protection
I0728 20:55:52.458733 1 shared_informer.go:262] Caches are synced for endpoint_slice
I0728 20:55:52.497493 1 shared_informer.go:262] Caches are synced for TTL after finished
I0728 20:55:52.532942 1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
I0728 20:55:52.536189 1 shared_informer.go:262] Caches are synced for resource quota
I0728 20:55:52.541597 1 shared_informer.go:262] Caches are synced for job
I0728 20:55:52.543779 1 shared_informer.go:262] Caches are synced for cronjob
I0728 20:55:52.569436 1 shared_informer.go:262] Caches are synced for endpoint
I0728 20:55:52.614150 1 shared_informer.go:262] Caches are synced for resource quota
I0728 20:55:52.638099 1 shared_informer.go:262] Caches are synced for attach detach
I0728 20:55:53.050038 1 shared_informer.go:262] Caches are synced for garbage collector
I0728 20:55:53.064190 1 shared_informer.go:262] Caches are synced for garbage collector
I0728 20:55:53.064212 1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
*
* ==> kube-controller-manager [68f145b308b2] <==
*
*
* ==> kube-proxy [69e901f3a577] <==
*
*
* ==> kube-proxy [aac25227dad3] <==
* I0728 20:55:41.241812 1 node.go:163] Successfully retrieved node IP: 192.168.67.2
I0728 20:55:41.241885 1 server_others.go:138] "Detected node IP" address="192.168.67.2"
I0728 20:55:41.241917 1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I0728 20:55:41.281911 1 server_others.go:206] "Using iptables Proxier"
I0728 20:55:41.281955 1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
I0728 20:55:41.281964 1 server_others.go:214] "Creating dualStackProxier for iptables"
I0728 20:55:41.281975 1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
I0728 20:55:41.282004 1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I0728 20:55:41.282176 1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I0728 20:55:41.282440 1 server.go:661] "Version info" version="v1.24.3"
I0728 20:55:41.282463 1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0728 20:55:41.313324 1 config.go:226] "Starting endpoint slice config controller"
I0728 20:55:41.313411 1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
I0728 20:55:41.313473 1 config.go:317] "Starting service config controller"
I0728 20:55:41.313497 1 shared_informer.go:255] Waiting for caches to sync for service config
I0728 20:55:41.313569 1 config.go:444] "Starting node config controller"
I0728 20:55:41.313601 1 shared_informer.go:255] Waiting for caches to sync for node config
I0728 20:55:41.414434 1 shared_informer.go:262] Caches are synced for node config
I0728 20:55:41.414459 1 shared_informer.go:262] Caches are synced for service config
I0728 20:55:41.414488 1 shared_informer.go:262] Caches are synced for endpoint slice config
*
* ==> kube-scheduler [3e228f51ae38] <==
* I0728 20:55:35.512759 1 serving.go:348] Generated self-signed cert in-memory
W0728 20:55:38.620149 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0728 20:55:38.620384 1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0728 20:55:38.620483 1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
W0728 20:55:38.620569 1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0728 20:55:38.630051 1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.3"
I0728 20:55:38.630079 1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0728 20:55:38.631236 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0728 20:55:38.631279 1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0728 20:55:38.631358 1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
I0728 20:55:38.631844 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0728 20:55:38.732391 1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
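The three scheduler warnings above are usually transient while the apiserver finishes starting, and the scheduler proceeds without the extension-apiserver authentication config. If the forbidden error persisted, the remedy the log itself suggests is a RoleBinding on extension-apiserver-authentication-reader; the client-go sketch below binds it to the exact user named in the error. The binding name "scheduler-authn-reader" and the kubeconfig location are illustrative assumptions, not values from this run.

    package main

    import (
        "context"
        "log"

        rbacv1 "k8s.io/api/rbac/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumes a kubeconfig at the default location (~/.kube/config).
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            log.Fatal(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // Bind the existing extension-apiserver-authentication-reader Role to
        // User "system:kube-scheduler" from the warning. The RoleBinding name
        // below is a hypothetical placeholder.
        rb := &rbacv1.RoleBinding{
            ObjectMeta: metav1.ObjectMeta{Name: "scheduler-authn-reader", Namespace: "kube-system"},
            RoleRef: rbacv1.RoleRef{
                APIGroup: "rbac.authorization.k8s.io",
                Kind:     "Role",
                Name:     "extension-apiserver-authentication-reader",
            },
            Subjects: []rbacv1.Subject{{
                Kind:     "User",
                APIGroup: "rbac.authorization.k8s.io",
                Name:     "system:kube-scheduler",
            }},
        }
        if _, err := cs.RbacV1().RoleBindings("kube-system").Create(
            context.Background(), rb, metav1.CreateOptions{}); err != nil {
            log.Fatal(err)
        }
    }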
*
* ==> kube-scheduler [935c125c4397] <==
*
*
* ==> kubelet <==
* -- Logs begin at Thu 2022-07-28 20:54:19 UTC, end at Thu 2022-07-28 20:55:59 UTC. --
Jul 28 20:55:38 pause-20220728205408-9843 kubelet[5344]: E0728 20:55:38.456451 5344 kubelet.go:2424] "Error getting node" err="node \"pause-20220728205408-9843\" not found"
Jul 28 20:55:38 pause-20220728205408-9843 kubelet[5344]: E0728 20:55:38.557302 5344 kubelet.go:2424] "Error getting node" err="node \"pause-20220728205408-9843\" not found"
Jul 28 20:55:38 pause-20220728205408-9843 kubelet[5344]: I0728 20:55:38.658327 5344 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
Jul 28 20:55:38 pause-20220728205408-9843 kubelet[5344]: I0728 20:55:38.659018 5344 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
Jul 28 20:55:38 pause-20220728205408-9843 kubelet[5344]: I0728 20:55:38.829563 5344 kubelet_node_status.go:108] "Node was previously registered" node="pause-20220728205408-9843"
Jul 28 20:55:38 pause-20220728205408-9843 kubelet[5344]: I0728 20:55:38.829681 5344 kubelet_node_status.go:73] "Successfully registered node" node="pause-20220728205408-9843"
Jul 28 20:55:39 pause-20220728205408-9843 kubelet[5344]: I0728 20:55:39.445065 5344 apiserver.go:52] "Watching apiserver"
Jul 28 20:55:39 pause-20220728205408-9843 kubelet[5344]: I0728 20:55:39.449222 5344 topology_manager.go:200] "Topology Admit Handler"
Jul 28 20:55:39 pause-20220728205408-9843 kubelet[5344]: I0728 20:55:39.449351 5344 topology_manager.go:200] "Topology Admit Handler"
Jul 28 20:55:39 pause-20220728205408-9843 kubelet[5344]: I0728 20:55:39.449425 5344 topology_manager.go:200] "Topology Admit Handler"
Jul 28 20:55:39 pause-20220728205408-9843 kubelet[5344]: I0728 20:55:39.563288 5344 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4011009e-4b1d-4e94-9355-f2de01699705-kube-proxy\") pod \"kube-proxy-bgdg9\" (UID: \"4011009e-4b1d-4e94-9355-f2de01699705\") " pod="kube-system/kube-proxy-bgdg9"
Jul 28 20:55:39 pause-20220728205408-9843 kubelet[5344]: I0728 20:55:39.563352 5344 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mpgs\" (UniqueName: \"kubernetes.io/projected/4011009e-4b1d-4e94-9355-f2de01699705-kube-api-access-7mpgs\") pod \"kube-proxy-bgdg9\" (UID: \"4011009e-4b1d-4e94-9355-f2de01699705\") " pod="kube-system/kube-proxy-bgdg9"
Jul 28 20:55:39 pause-20220728205408-9843 kubelet[5344]: I0728 20:55:39.563388 5344 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d00c451-82b6-48d5-beaa-bc5fa6f1a242-config-volume\") pod \"coredns-6d4b75cb6d-8z2tg\" (UID: \"7d00c451-82b6-48d5-beaa-bc5fa6f1a242\") " pod="kube-system/coredns-6d4b75cb6d-8z2tg"
Jul 28 20:55:39 pause-20220728205408-9843 kubelet[5344]: I0728 20:55:39.563513 5344 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xb5fr\" (UniqueName: \"kubernetes.io/projected/7d00c451-82b6-48d5-beaa-bc5fa6f1a242-kube-api-access-xb5fr\") pod \"coredns-6d4b75cb6d-8z2tg\" (UID: \"7d00c451-82b6-48d5-beaa-bc5fa6f1a242\") " pod="kube-system/coredns-6d4b75cb6d-8z2tg"
Jul 28 20:55:39 pause-20220728205408-9843 kubelet[5344]: I0728 20:55:39.563662 5344 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4011009e-4b1d-4e94-9355-f2de01699705-xtables-lock\") pod \"kube-proxy-bgdg9\" (UID: \"4011009e-4b1d-4e94-9355-f2de01699705\") " pod="kube-system/kube-proxy-bgdg9"
Jul 28 20:55:39 pause-20220728205408-9843 kubelet[5344]: I0728 20:55:39.563710 5344 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4011009e-4b1d-4e94-9355-f2de01699705-lib-modules\") pod \"kube-proxy-bgdg9\" (UID: \"4011009e-4b1d-4e94-9355-f2de01699705\") " pod="kube-system/kube-proxy-bgdg9"
Jul 28 20:55:39 pause-20220728205408-9843 kubelet[5344]: I0728 20:55:39.563731 5344 reconciler.go:157] "Reconciler: start to sync state"
Jul 28 20:55:39 pause-20220728205408-9843 kubelet[5344]: I0728 20:55:39.788639 5344 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=bb68aeee-a86e-4279-89bc-6ed7151bdbf1 path="/var/lib/kubelet/pods/bb68aeee-a86e-4279-89bc-6ed7151bdbf1/volumes"
Jul 28 20:55:41 pause-20220728205408-9843 kubelet[5344]: I0728 20:55:41.244504 5344 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="43eeb5ca36c34348d82248a73851d418ba9e36f96995d7fc6f0687a9c76d86d2"
Jul 28 20:55:43 pause-20220728205408-9843 kubelet[5344]: I0728 20:55:43.274287 5344 prober_manager.go:274] "Failed to trigger a manual run" probe="Readiness"
Jul 28 20:55:46 pause-20220728205408-9843 kubelet[5344]: I0728 20:55:46.627600 5344 prober_manager.go:274] "Failed to trigger a manual run" probe="Readiness"
Jul 28 20:55:54 pause-20220728205408-9843 kubelet[5344]: I0728 20:55:54.844880 5344 topology_manager.go:200] "Topology Admit Handler"
Jul 28 20:55:54 pause-20220728205408-9843 kubelet[5344]: I0728 20:55:54.921218 5344 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/bbaefc83-1444-4c06-837a-3e437bbb77f2-tmp\") pod \"storage-provisioner\" (UID: \"bbaefc83-1444-4c06-837a-3e437bbb77f2\") " pod="kube-system/storage-provisioner"
Jul 28 20:55:54 pause-20220728205408-9843 kubelet[5344]: I0728 20:55:54.921288 5344 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5w8kw\" (UniqueName: \"kubernetes.io/projected/bbaefc83-1444-4c06-837a-3e437bbb77f2-kube-api-access-5w8kw\") pod \"storage-provisioner\" (UID: \"bbaefc83-1444-4c06-837a-3e437bbb77f2\") " pod="kube-system/storage-provisioner"
Jul 28 20:55:55 pause-20220728205408-9843 kubelet[5344]: I0728 20:55:55.384155 5344 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="1c0b89f9f48bfa40fdc96cc0e2834f93eee33ab5cb93d5e930f68eb8b9905c6c"
*
* ==> storage-provisioner [54535d0f06bf] <==
* I0728 20:55:55.586652 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0728 20:55:55.596637 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0728 20:55:55.596678 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0728 20:55:55.619218 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0728 20:55:55.619423 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_pause-20220728205408-9843_efb9a403-38d1-478b-9a10-30c50a8884ea!
I0728 20:55:55.619749 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a49a8a0b-b4bb-49f2-8733-af4df39f4401", APIVersion:"v1", ResourceVersion:"480", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-20220728205408-9843_efb9a403-38d1-478b-9a10-30c50a8884ea became leader
I0728 20:55:55.720359 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_pause-20220728205408-9843_efb9a403-38d1-478b-9a10-30c50a8884ea!
-- /stdout --
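One detail worth calling out from the storage-provisioner section at the end of the dump: it only starts its controller after winning a leader election on the kube-system/k8s.io-minikube-hostpath lock (an Endpoints-backed lock, per the Kind:"Endpoints" event at 20:55:55.619). A compact client-go sketch of the same pattern is below, reusing the lease name from the log but with the newer Lease-based lock rather than the deprecated Endpoints one; the kubeconfig location and the timing values are assumptions, not the provisioner's actual configuration.

    package main

    import (
        "context"
        "log"
        "os"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/tools/leaderelection"
        "k8s.io/client-go/tools/leaderelection/resourcelock"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            log.Fatal(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        id, _ := os.Hostname()
        lock := &resourcelock.LeaseLock{
            LeaseMeta: metav1.ObjectMeta{
                Name:      "k8s.io-minikube-hostpath", // same lock name as the log
                Namespace: "kube-system",
            },
            Client:     cs.CoordinationV1(),
            LockConfig: resourcelock.ResourceLockConfig{Identity: id},
        }

        leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
            Lock:          lock,
            LeaseDuration: 15 * time.Second, // assumed timings, for illustration
            RenewDeadline: 10 * time.Second,
            RetryPeriod:   2 * time.Second,
            Callbacks: leaderelection.LeaderCallbacks{
                OnStartedLeading: func(ctx context.Context) {
                    log.Println("acquired lease; provisioner controller would start here")
                },
                OnStoppedLeading: func() {
                    log.Println("lost lease; shutting down")
                },
            },
        })
    }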
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-20220728205408-9843 -n pause-20220728205408-9843
helpers_test.go:261: (dbg) Run: kubectl --context pause-20220728205408-9843 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods:
helpers_test.go:272: ======> post-mortem[TestPause/serial/SecondStartNoReconfiguration]: describe non-running pods <======
helpers_test.go:275: (dbg) Run: kubectl --context pause-20220728205408-9843 describe pod
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context pause-20220728205408-9843 describe pod : exit status 1 (41.543083ms)
** stderr **
error: resource name may not be empty
** /stderr **
helpers_test.go:277: kubectl --context pause-20220728205408-9843 describe pod : exit status 1
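This exit status 1 is benign: the field selector at helpers_test.go:261 matched no pods (the non-running pod list above is empty), so `kubectl describe pod` was invoked with no resource names and kubectl refused the empty argument. A sketch of how a harness could guard that call, reusing the exact kubectl invocations from above (the standalone program layout is illustrative):

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        // Same query the post-mortem helper runs above.
        out, err := exec.Command("kubectl", "--context", "pause-20220728205408-9843",
            "get", "po", "-A", "-o=jsonpath={.items[*].metadata.name}",
            "--field-selector=status.phase!=Running").Output()
        if err != nil {
            log.Fatal(err)
        }

        names := strings.Fields(string(out))
        if len(names) == 0 {
            // Avoids kubectl's "resource name may not be empty" error.
            fmt.Println("no non-running pods; skipping describe")
            return
        }

        args := append([]string{"--context", "pause-20220728205408-9843", "describe", "pod"}, names...)
        desc, err := exec.Command("kubectl", args...).CombinedOutput()
        if err != nil {
            log.Fatal(err)
        }
        fmt.Print(string(desc))
    }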
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestPause/serial/SecondStartNoReconfiguration]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect pause-20220728205408-9843
helpers_test.go:235: (dbg) docker inspect pause-20220728205408-9843:
-- stdout --
[
{
"Id": "3fb439e0c221a49b3872aa9a8448387885fde8445e2e7a49655b9b12f3fccaa6",
"Created": "2022-07-28T20:54:18.261610669Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 212639,
"ExitCode": 0,
"Error": "",
"StartedAt": "2022-07-28T20:54:18.951427486Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:443d84da239e4e701685e1614ef94cd6b60d0f0b15265a51d4f657992a9c59d8",
"ResolvConfPath": "/var/lib/docker/containers/3fb439e0c221a49b3872aa9a8448387885fde8445e2e7a49655b9b12f3fccaa6/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/3fb439e0c221a49b3872aa9a8448387885fde8445e2e7a49655b9b12f3fccaa6/hostname",
"HostsPath": "/var/lib/docker/containers/3fb439e0c221a49b3872aa9a8448387885fde8445e2e7a49655b9b12f3fccaa6/hosts",
"LogPath": "/var/lib/docker/containers/3fb439e0c221a49b3872aa9a8448387885fde8445e2e7a49655b9b12f3fccaa6/3fb439e0c221a49b3872aa9a8448387885fde8445e2e7a49655b9b12f3fccaa6-json.log",
"Name": "/pause-20220728205408-9843",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"pause-20220728205408-9843:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "pause-20220728205408-9843",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 2147483648,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"KernelMemory": 0,
"KernelMemoryTCP": 0,
"MemoryReservation": 0,
"MemorySwap": 4294967296,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/c9095ec585f620b2b7b5b07073d9a6f9f0ffcba74afec8cde45e225fa2d9add2-init/diff:/var/lib/docker/overlay2/4199c6556cea42181cceb9906f820822443cb0b5f7ea328eca775adfd8d1708e/diff:/var/lib/docker/overlay2/b24dbffe8a02c42d5181cc376e50868eb2daf54ff07b615d6650ec5b797535fd/diff:/var/lib/docker/overlay2/660766159b0d261999d298e45910b16a8946daaf2bf143c351a5952f21e4ce59/diff:/var/lib/docker/overlay2/67889e79cf7ba13abbae4a874d98b70aa2870c90f7da1e9b916698f4d0d5a13d/diff:/var/lib/docker/overlay2/3134c941b14d494841493f2137121da94dc3191e293f397a832ccac047a6f9c5/diff:/var/lib/docker/overlay2/ba80490bd9be017b8e9ce98fe920b11a99bb56a4f613da7ea777e8e851200451/diff:/var/lib/docker/overlay2/d301c74cb5339bc9bc007153c8b68524f72892fecb626edc1dea29337889ee67/diff:/var/lib/docker/overlay2/b3adebbc2adf56d5de785c0fa4b2127a191eae5274ad6161012f59e1e8136eaa/diff:/var/lib/docker/overlay2/4cdbbd63ba765f5ed0d2515ec5a80ee18ca61d7200dc66dcd3e6829397bd15a6/diff:/var/lib/docker/overlay2/2441c7
ad19ee32fbf2a557b7a3303aa6670350555c7e57e8228795df81c39274/diff:/var/lib/docker/overlay2/fb5a05c8be9267430588e13bc96d29bf5eba6cfe65dffdd53b5c68a683371f07/diff:/var/lib/docker/overlay2/f5e54dab2e893d70cd9adea1081cb9d970e6fb6673ebb766c8c43f965e8186e8/diff:/var/lib/docker/overlay2/3ccbd571ef95f280a051b35e3f349d19e0d171bdc863fd1622a70ccc196855b6/diff:/var/lib/docker/overlay2/fabef18c4c7604f7510412b2942fc2af70e76c7f80369d546f330590988ed056/diff:/var/lib/docker/overlay2/dcab26aa57a8ce8a859c2adbc1adb22ef65acdc1a83ebc6b18a60c7a3cb3dd8f/diff:/var/lib/docker/overlay2/13603af21440f26866999875a26f9e88ec243a0101a9cda64d429141bd045fe7/diff:/var/lib/docker/overlay2/67d9e222c9fb201942639f613e92f686d758195118b7a10c49b882833f665253/diff:/var/lib/docker/overlay2/ebe921e8d0699dadbcaa5a4f92eb5b5c7aa68df1e9acf6ce0fc6b0161834cce3/diff:/var/lib/docker/overlay2/35a5973a506a54dbb275f743e439497153e9b7593061b61cecc318dde62545fb/diff:/var/lib/docker/overlay2/701264336a77431a21c1b6efed245aa3629feb70775affd7f67ffffa391a026c/diff:/var/lib/d
ocker/overlay2/8b6aa1b091bb5f97e231465ecd79889b8962ff7cc7b7a1956c8a434e2f44b664/diff:/var/lib/docker/overlay2/52e4531bcb1cdcae7ebcbcbc8239139b29a4a7ba9031c55b597f916f9802c576/diff:/var/lib/docker/overlay2/8956f31605b88ccd217e895de5f2163b2a86ee944bd4805c5bdf3fe58b782e92/diff:/var/lib/docker/overlay2/3a10bd0e6ad21b576c4b0e30403206fbd25df5a4f315cd84456a7f40e75f50cd/diff:/var/lib/docker/overlay2/f6a9cb2902dbdf745c3df96bdc9e81f352e27221eed3b83aa3a6fa02dc18d4d5/diff:/var/lib/docker/overlay2/4f12f2c1a00a4dc134d7f6e835417ec23ea905b5a638fdfd5f82d20997965ec3/diff:/var/lib/docker/overlay2/98865ed752433b8b59e76055af6cae52e491a81b0c202e53aa7671e79f706b13/diff:/var/lib/docker/overlay2/ae14114745b79ff18616d57b4a38e2c8781f15de8717176527f6c4ae0de5f8a7/diff:/var/lib/docker/overlay2/75bf9540aad128f56a4431eec06aa8215df35dc327c7531a20a38b31461bc43b/diff:/var/lib/docker/overlay2/26471df49233e2f02186ae90a4b4adad63dcce28b56378d3feaad1ac8da0a6f8/diff:/var/lib/docker/overlay2/475fcc49601486c8df0f1e0ccb755565cee6d9a3649121772fc715bdf9c
748f8/diff:/var/lib/docker/overlay2/ee157a254a59f89ceea14fd14a197d0966e2e812063c3a491b87f19c4d67750b/diff:/var/lib/docker/overlay2/fa44cebeeaea1c19fcd1d09cc224ba414119c57cd51541b89aa402fcc1466fd7/diff:/var/lib/docker/overlay2/1e2d8d61412a316e76385403d91cbb4a6e26362794eed86be6a433dc3eb5794a/diff:/var/lib/docker/overlay2/73f555b30601b74f8524bed7819729e89228457af1e276f81826c636105fe3b7/diff:/var/lib/docker/overlay2/b52b80028134836783b3a48186aa135cb8753d31f7c8caf150aa4cdb4103e90d/diff:/var/lib/docker/overlay2/8da3d321a6e82f660eb09cd89bdbf24366d82517326dec73ba25c01ff4c2f266/diff:/var/lib/docker/overlay2/4f1a38dc276744ded9fa33010b170d716f2fbe0724cc13ec8f8472d3d9f194d6/diff:/var/lib/docker/overlay2/7159ebfec82182edf221de518b1790661ad26171e64412e656b1f4d1f850785f/diff:/var/lib/docker/overlay2/b6670648c10e9f0fae89abf3712a633146e1dab188784d9e481e6f34682a97ec/diff:/var/lib/docker/overlay2/48c415c932354a7d3a3b4427d75d4f1cadb419ee3fa91f3f1ee7e517e77bae7c/diff",
"MergedDir": "/var/lib/docker/overlay2/c9095ec585f620b2b7b5b07073d9a6f9f0ffcba74afec8cde45e225fa2d9add2/merged",
"UpperDir": "/var/lib/docker/overlay2/c9095ec585f620b2b7b5b07073d9a6f9f0ffcba74afec8cde45e225fa2d9add2/diff",
"WorkDir": "/var/lib/docker/overlay2/c9095ec585f620b2b7b5b07073d9a6f9f0ffcba74afec8cde45e225fa2d9add2/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "volume",
"Name": "pause-20220728205408-9843",
"Source": "/var/lib/docker/volumes/pause-20220728205408-9843/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
},
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
}
],
"Config": {
"Hostname": "pause-20220728205408-9843",
"Domainname": "",
"User": "root",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "pause-20220728205408-9843",
"name.minikube.sigs.k8s.io": "pause-20220728205408-9843",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "de1ae3fe54f4aa564dd4a7144be9787f992747a29bdd3bd7ed096768d816472a",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49337"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49336"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49333"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49335"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49334"
}
]
},
"SandboxKey": "/var/run/docker/netns/de1ae3fe54f4",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"pause-20220728205408-9843": {
"IPAMConfig": {
"IPv4Address": "192.168.67.2"
},
"Links": null,
"Aliases": [
"3fb439e0c221",
"pause-20220728205408-9843"
],
"NetworkID": "d1fd6d8f5de8a3061a0c5298093dfe26d13957ea22c72fa51e58d52614dcfb64",
"EndpointID": "e0fd9777811b7913fe84e065dfdb9202fd840ff04d87f46242d51f89edec65d6",
"Gateway": "192.168.67.1",
"IPAddress": "192.168.67.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:c0:a8:43:02",
"DriverOpts": null
}
}
}
}
]
-- /stdout --
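Of this inspect dump, the post-mortem mainly cares about the State block (running and, for TestPause specifically, not paused) and the network endpoint at 192.168.67.2. A short sketch of reading just those fields through the Docker Engine Go SDK, assuming a daemon reachable via the standard DOCKER_HOST environment:

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/docker/docker/client"
    )

    func main() {
        cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
        if err != nil {
            log.Fatal(err)
        }
        defer cli.Close()

        info, err := cli.ContainerInspect(context.Background(), "pause-20220728205408-9843")
        if err != nil {
            log.Fatal(err)
        }

        // Matches the State block above: running, not paused, no restarts.
        fmt.Printf("status=%s paused=%v restarts=%d\n",
            info.State.Status, info.State.Paused, info.RestartCount)

        // One endpoint per attached network; here the profile-named network
        // with IP 192.168.67.2.
        for name, ep := range info.NetworkSettings.Networks {
            fmt.Printf("network=%s ip=%s gateway=%s\n", name, ep.IPAddress, ep.Gateway)
        }
    }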
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p pause-20220728205408-9843 -n pause-20220728205408-9843
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p pause-20220728205408-9843 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-20220728205408-9843 logs -n 25: (3.975808767s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs:
-- stdout --
*
* ==> Audit <==
* |---------|----------------------------------------|----------------------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|----------------------------------------|----------------------------------------|---------|---------|---------------------|---------------------|
| profile | list | minikube | jenkins | v1.26.0 | 28 Jul 22 20:53 UTC | 28 Jul 22 20:53 UTC |
| profile | list --output=json | minikube | jenkins | v1.26.0 | 28 Jul 22 20:53 UTC | 28 Jul 22 20:53 UTC |
| stop | -p | NoKubernetes-20220728205236-9843 | jenkins | v1.26.0 | 28 Jul 22 20:53 UTC | 28 Jul 22 20:53 UTC |
| | NoKubernetes-20220728205236-9843 | | | | | |
| start | -p | NoKubernetes-20220728205236-9843 | jenkins | v1.26.0 | 28 Jul 22 20:53 UTC | 28 Jul 22 20:54 UTC |
| | NoKubernetes-20220728205236-9843 | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| ssh | -p | NoKubernetes-20220728205236-9843 | jenkins | v1.26.0 | 28 Jul 22 20:54 UTC | |
| | NoKubernetes-20220728205236-9843 | | | | | |
| | sudo systemctl is-active --quiet | | | | | |
| | service kubelet | | | | | |
| delete | -p | NoKubernetes-20220728205236-9843 | jenkins | v1.26.0 | 28 Jul 22 20:54 UTC | 28 Jul 22 20:54 UTC |
| | NoKubernetes-20220728205236-9843 | | | | | |
| start | -p pause-20220728205408-9843 | pause-20220728205408-9843 | jenkins | v1.26.0 | 28 Jul 22 20:54 UTC | 28 Jul 22 20:55 UTC |
| | --memory=2048 | | | | | |
| | --install-addons=false | | | | | |
| | --wait=all --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| delete | -p | stopped-upgrade-20220728205236-9843 | jenkins | v1.26.0 | 28 Jul 22 20:54 UTC | 28 Jul 22 20:54 UTC |
| | stopped-upgrade-20220728205236-9843 | | | | | |
| start | -p | kubernetes-upgrade-20220728205415-9843 | jenkins | v1.26.0 | 28 Jul 22 20:54 UTC | 28 Jul 22 20:55 UTC |
| | kubernetes-upgrade-20220728205415-9843 | | | | | |
| | --memory=2200 | | | | | |
| | --kubernetes-version=v1.16.0 | | | | | |
| | --alsologtostderr -v=1 --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| delete | -p | missing-upgrade-20220728205236-9843 | jenkins | v1.26.0 | 28 Jul 22 20:54 UTC | 28 Jul 22 20:54 UTC |
| | missing-upgrade-20220728205236-9843 | | | | | |
| start | -p | force-systemd-flag-20220728205429-9843 | jenkins | v1.26.0 | 28 Jul 22 20:54 UTC | 28 Jul 22 20:55 UTC |
| | force-systemd-flag-20220728205429-9843 | | | | | |
| | --memory=2048 --force-systemd | | | | | |
| | --alsologtostderr -v=5 --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| start | -p | running-upgrade-20220728205348-9843 | jenkins | v1.26.0 | 28 Jul 22 20:54 UTC | 28 Jul 22 20:55 UTC |
| | running-upgrade-20220728205348-9843 | | | | | |
| | --memory=2200 --alsologtostderr | | | | | |
| | -v=1 --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| start | -p pause-20220728205408-9843 | pause-20220728205408-9843 | jenkins | v1.26.0 | 28 Jul 22 20:55 UTC | 28 Jul 22 20:55 UTC |
| | --alsologtostderr | | | | | |
| | -v=1 --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| ssh | force-systemd-flag-20220728205429-9843 | force-systemd-flag-20220728205429-9843 | jenkins | v1.26.0 | 28 Jul 22 20:55 UTC | 28 Jul 22 20:55 UTC |
| | ssh docker info --format | | | | | |
| | {{.CgroupDriver}} | | | | | |
| delete | -p | force-systemd-flag-20220728205429-9843 | jenkins | v1.26.0 | 28 Jul 22 20:55 UTC | 28 Jul 22 20:55 UTC |
| | force-systemd-flag-20220728205429-9843 | | | | | |
| stop | -p | kubernetes-upgrade-20220728205415-9843 | jenkins | v1.26.0 | 28 Jul 22 20:55 UTC | 28 Jul 22 20:55 UTC |
| | kubernetes-upgrade-20220728205415-9843 | | | | | |
| start | -p | force-systemd-env-20220728205516-9843 | jenkins | v1.26.0 | 28 Jul 22 20:55 UTC | 28 Jul 22 20:55 UTC |
| | force-systemd-env-20220728205516-9843 | | | | | |
| | --memory=2048 --alsologtostderr | | | | | |
| | -v=5 --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| start | -p | kubernetes-upgrade-20220728205415-9843 | jenkins | v1.26.0 | 28 Jul 22 20:55 UTC | |
| | kubernetes-upgrade-20220728205415-9843 | | | | | |
| | --memory=2200 | | | | | |
| | --kubernetes-version=v1.24.3 | | | | | |
| | --alsologtostderr -v=1 --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| delete | -p | running-upgrade-20220728205348-9843 | jenkins | v1.26.0 | 28 Jul 22 20:55 UTC | 28 Jul 22 20:55 UTC |
| | running-upgrade-20220728205348-9843 | | | | | |
| delete | -p flannel-20220728205532-9843 | flannel-20220728205532-9843 | jenkins | v1.26.0 | 28 Jul 22 20:55 UTC | 28 Jul 22 20:55 UTC |
| delete | -p | custom-flannel-20220728205532-9843 | jenkins | v1.26.0 | 28 Jul 22 20:55 UTC | 28 Jul 22 20:55 UTC |
| | custom-flannel-20220728205532-9843 | | | | | |
| start | -p | cert-expiration-20220728205533-9843 | jenkins | v1.26.0 | 28 Jul 22 20:55 UTC | |
| | cert-expiration-20220728205533-9843 | | | | | |
| | --memory=2048 --cert-expiration=3m | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| ssh | force-systemd-env-20220728205516-9843 | force-systemd-env-20220728205516-9843 | jenkins | v1.26.0 | 28 Jul 22 20:55 UTC | 28 Jul 22 20:55 UTC |
| | ssh docker info --format | | | | | |
| | {{.CgroupDriver}} | | | | | |
| delete | -p | force-systemd-env-20220728205516-9843 | jenkins | v1.26.0 | 28 Jul 22 20:55 UTC | 28 Jul 22 20:55 UTC |
| | force-systemd-env-20220728205516-9843 | | | | | |
| start | -p | docker-flags-20220728205555-9843 | jenkins | v1.26.0 | 28 Jul 22 20:55 UTC | |
| | docker-flags-20220728205555-9843 | | | | | |
| | --cache-images=false | | | | | |
| | --memory=2048 | | | | | |
| | --install-addons=false | | | | | |
| | --wait=false | | | | | |
| | --docker-env=FOO=BAR | | | | | |
| | --docker-env=BAZ=BAT | | | | | |
| | --docker-opt=debug | | | | | |
| | --docker-opt=icc=true | | | | | |
| | --alsologtostderr | | | | | |
| | -v=5 --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
|---------|----------------------------------------|----------------------------------------|---------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2022/07/28 20:55:55
Running on machine: ubuntu-20-agent-10
Binary: Built with gc go1.18.3 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0728 20:55:55.784576 246102 out.go:296] Setting OutFile to fd 1 ...
I0728 20:55:55.784752 246102 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0728 20:55:55.784763 246102 out.go:309] Setting ErrFile to fd 2...
I0728 20:55:55.784768 246102 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0728 20:55:55.784883 246102 root.go:332] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/bin
I0728 20:55:55.785463 246102 out.go:303] Setting JSON to false
I0728 20:55:55.787521 246102 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2307,"bootTime":1659039449,"procs":1333,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1013-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0728 20:55:55.787589 246102 start.go:125] virtualization: kvm guest
I0728 20:55:55.790143 246102 out.go:177] * [docker-flags-20220728205555-9843] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
I0728 20:55:55.791677 246102 notify.go:193] Checking for updates...
I0728 20:55:55.791678 246102 out.go:177] - MINIKUBE_LOCATION=14555
I0728 20:55:55.793108 246102 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0728 20:55:55.794463 246102 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
I0728 20:55:55.795950 246102 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube
I0728 20:55:55.797410 246102 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0728 20:55:55.799027 246102 config.go:178] Loaded profile config "cert-expiration-20220728205533-9843": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
I0728 20:55:55.799123 246102 config.go:178] Loaded profile config "kubernetes-upgrade-20220728205415-9843": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
I0728 20:55:55.799203 246102 config.go:178] Loaded profile config "pause-20220728205408-9843": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
I0728 20:55:55.799235 246102 driver.go:365] Setting default libvirt URI to qemu:///system
I0728 20:55:55.851158 246102 docker.go:137] docker version: linux-20.10.17
I0728 20:55:55.851253 246102 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0728 20:55:55.989992 246102 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-07-28 20:55:55.892426044 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1013-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662447616 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
I0728 20:55:55.990103 246102 docker.go:254] overlay module found
I0728 20:55:55.992446 246102 out.go:177] * Using the docker driver based on user configuration
I0728 20:55:55.993845 246102 start.go:284] selected driver: docker
I0728 20:55:55.993855 246102 start.go:808] validating driver "docker" against <nil>
I0728 20:55:55.993873 246102 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0728 20:55:55.994701 246102 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0728 20:55:56.106221 246102 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-07-28 20:55:56.025931675 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1013-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662447616 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
I0728 20:55:56.106345 246102 start_flags.go:296] no existing cluster config was found, will generate one from the flags
I0728 20:55:56.106503 246102 start_flags.go:848] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
I0728 20:55:56.108587 246102 out.go:177] * Using Docker driver with root privileges
I0728 20:55:56.109915 246102 cni.go:95] Creating CNI manager for ""
I0728 20:55:56.109941 246102 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0728 20:55:56.109956 246102 start_flags.go:310] config:
{Name:docker-flags-20220728205555-9843 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:docker-flags-20220728205555-9843 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
I0728 20:55:56.112303 246102 out.go:177] * Starting control plane node docker-flags-20220728205555-9843 in cluster docker-flags-20220728205555-9843
I0728 20:55:56.113609 246102 cache.go:120] Beginning downloading kic base image for docker with docker
I0728 20:55:56.114828 246102 out.go:177] * Pulling base image ...
I0728 20:55:56.116069 246102 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
I0728 20:55:56.116112 246102 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4
I0728 20:55:56.116116 246102 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon
I0728 20:55:56.116129 246102 cache.go:57] Caching tarball of preloaded images
I0728 20:55:56.116358 246102 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0728 20:55:56.116376 246102 cache.go:60] Finished verifying existence of preloaded tar for v1.24.3 on docker
I0728 20:55:56.116495 246102 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/docker-flags-20220728205555-9843/config.json ...
I0728 20:55:56.116523 246102 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/docker-flags-20220728205555-9843/config.json: {Name:mk87b539a2557392f06fde77c4681d4160025661 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0728 20:55:56.151936 246102 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon, skipping pull
I0728 20:55:56.151966 246102 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 exists in daemon, skipping load
I0728 20:55:56.151983 246102 cache.go:208] Successfully downloaded all kic artifacts
I0728 20:55:56.152026 246102 start.go:370] acquiring machines lock for docker-flags-20220728205555-9843: {Name:mk84733713e520b0a2055bad17ffd450c466b09d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0728 20:55:56.152167 246102 start.go:374] acquired machines lock for "docker-flags-20220728205555-9843" in 119.024µs
I0728 20:55:56.152200 246102 start.go:92] Provisioning new machine with config: &{Name:docker-flags-20220728205555-9843 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:docker-flags-20220728205555-9843 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:} &{Name: IP: Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}
I0728 20:55:56.152305 246102 start.go:132] createHost starting for "" (driver="docker")
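The preload lines at 20:55:56.116 above show the cache fast path: the v1.24.3 docker/overlay2 preload tarball and the kicbase image are both already present locally, so both downloads are skipped and the run proceeds straight to createHost. The check reduces to a stat-before-download pattern; the sketch below uses a placeholder path and URL, neither of which is minikube's real constant.

    package main

    import (
        "fmt"
        "io"
        "log"
        "net/http"
        "os"
    )

    // ensureCached downloads url into path only when the file is missing,
    // mirroring the "Found local preload ... skipping download" behavior
    // logged above. Illustrative sketch only.
    func ensureCached(path, url string) error {
        if _, err := os.Stat(path); err == nil {
            fmt.Println("found in cache, skipping download:", path)
            return nil
        }
        resp, err := http.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        f, err := os.Create(path)
        if err != nil {
            return err
        }
        defer f.Close()
        _, err = io.Copy(f, resp.Body)
        return err
    }

    func main() {
        // Placeholder path and URL, for illustration only.
        if err := ensureCached("/tmp/preloaded-images.tar.lz4",
            "https://example.com/preloaded-images.tar.lz4"); err != nil {
            log.Fatal(err)
        }
    }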
I0728 20:55:54.847824 228788 addons.go:414] enableAddons completed in 863.208744ms
I0728 20:55:55.155995 228788 pod_ready.go:92] pod "etcd-pause-20220728205408-9843" in "kube-system" namespace has status "Ready":"True"
I0728 20:55:55.156017 228788 pod_ready.go:81] duration metric: took 400.176809ms waiting for pod "etcd-pause-20220728205408-9843" in "kube-system" namespace to be "Ready" ...
I0728 20:55:55.156026 228788 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-20220728205408-9843" in "kube-system" namespace to be "Ready" ...
I0728 20:55:55.556032 228788 pod_ready.go:92] pod "kube-apiserver-pause-20220728205408-9843" in "kube-system" namespace has status "Ready":"True"
I0728 20:55:55.556067 228788 pod_ready.go:81] duration metric: took 400.032237ms waiting for pod "kube-apiserver-pause-20220728205408-9843" in "kube-system" namespace to be "Ready" ...
I0728 20:55:55.556082 228788 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-20220728205408-9843" in "kube-system" namespace to be "Ready" ...
I0728 20:55:55.956043 228788 pod_ready.go:92] pod "kube-controller-manager-pause-20220728205408-9843" in "kube-system" namespace has status "Ready":"True"
I0728 20:55:55.956069 228788 pod_ready.go:81] duration metric: took 399.978224ms waiting for pod "kube-controller-manager-pause-20220728205408-9843" in "kube-system" namespace to be "Ready" ...
I0728 20:55:55.956081 228788 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bgdg9" in "kube-system" namespace to be "Ready" ...
I0728 20:55:56.355801 228788 pod_ready.go:92] pod "kube-proxy-bgdg9" in "kube-system" namespace has status "Ready":"True"
I0728 20:55:56.355824 228788 pod_ready.go:81] duration metric: took 399.733762ms waiting for pod "kube-proxy-bgdg9" in "kube-system" namespace to be "Ready" ...
I0728 20:55:56.355836 228788 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-20220728205408-9843" in "kube-system" namespace to be "Ready" ...
I0728 20:55:56.755604 228788 pod_ready.go:92] pod "kube-scheduler-pause-20220728205408-9843" in "kube-system" namespace has status "Ready":"True"
I0728 20:55:56.755626 228788 pod_ready.go:81] duration metric: took 399.78195ms waiting for pod "kube-scheduler-pause-20220728205408-9843" in "kube-system" namespace to be "Ready" ...
I0728 20:55:56.755635 228788 pod_ready.go:38] duration metric: took 2.598319546s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0728 20:55:56.755657 228788 api_server.go:51] waiting for apiserver process to appear ...
I0728 20:55:56.755703 228788 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 20:55:56.765960 228788 api_server.go:71] duration metric: took 2.78138638s to wait for apiserver process to appear ...
I0728 20:55:56.765985 228788 api_server.go:87] waiting for apiserver healthz status ...
I0728 20:55:56.765999 228788 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I0728 20:55:56.771391 228788 api_server.go:266] https://192.168.67.2:8443/healthz returned 200:
ok
I0728 20:55:56.772200 228788 api_server.go:140] control plane version: v1.24.3
I0728 20:55:56.772218 228788 api_server.go:130] duration metric: took 6.226542ms to wait for apiserver health ...
I0728 20:55:56.772230 228788 system_pods.go:43] waiting for kube-system pods to appear ...
I0728 20:55:56.958841 228788 system_pods.go:59] 7 kube-system pods found
I0728 20:55:56.958878 228788 system_pods.go:61] "coredns-6d4b75cb6d-8z2tg" [7d00c451-82b6-48d5-beaa-bc5fa6f1a242] Running
I0728 20:55:56.958885 228788 system_pods.go:61] "etcd-pause-20220728205408-9843" [6559fba2-a38c-4c54-8cb2-2a370253e16a] Running
I0728 20:55:56.958893 228788 system_pods.go:61] "kube-apiserver-pause-20220728205408-9843" [3b2dac3b-2cc5-4181-9ec8-1b75aa19132e] Running
I0728 20:55:56.958900 228788 system_pods.go:61] "kube-controller-manager-pause-20220728205408-9843" [2d218ea7-e53d-42a8-a37c-84c4f5057aae] Running
I0728 20:55:56.958906 228788 system_pods.go:61] "kube-proxy-bgdg9" [4011009e-4b1d-4e94-9355-f2de01699705] Running
I0728 20:55:56.958913 228788 system_pods.go:61] "kube-scheduler-pause-20220728205408-9843" [5cd50c77-b5ad-4ba4-83f0-c5c191231738] Running
I0728 20:55:56.958920 228788 system_pods.go:61] "storage-provisioner" [bbaefc83-1444-4c06-837a-3e437bbb77f2] Running
I0728 20:55:56.958926 228788 system_pods.go:74] duration metric: took 186.690368ms to wait for pod list to return data ...
I0728 20:55:56.958936 228788 default_sa.go:34] waiting for default service account to be created ...
I0728 20:55:57.157423 228788 default_sa.go:45] found service account: "default"
I0728 20:55:57.157453 228788 default_sa.go:55] duration metric: took 198.506077ms for default service account to be created ...
I0728 20:55:57.157463 228788 system_pods.go:116] waiting for k8s-apps to be running ...
I0728 20:55:57.359189 228788 system_pods.go:86] 7 kube-system pods found
I0728 20:55:57.359220 228788 system_pods.go:89] "coredns-6d4b75cb6d-8z2tg" [7d00c451-82b6-48d5-beaa-bc5fa6f1a242] Running
I0728 20:55:57.359228 228788 system_pods.go:89] "etcd-pause-20220728205408-9843" [6559fba2-a38c-4c54-8cb2-2a370253e16a] Running
I0728 20:55:57.359235 228788 system_pods.go:89] "kube-apiserver-pause-20220728205408-9843" [3b2dac3b-2cc5-4181-9ec8-1b75aa19132e] Running
I0728 20:55:57.359241 228788 system_pods.go:89] "kube-controller-manager-pause-20220728205408-9843" [2d218ea7-e53d-42a8-a37c-84c4f5057aae] Running
I0728 20:55:57.359248 228788 system_pods.go:89] "kube-proxy-bgdg9" [4011009e-4b1d-4e94-9355-f2de01699705] Running
I0728 20:55:57.359254 228788 system_pods.go:89] "kube-scheduler-pause-20220728205408-9843" [5cd50c77-b5ad-4ba4-83f0-c5c191231738] Running
I0728 20:55:57.359260 228788 system_pods.go:89] "storage-provisioner" [bbaefc83-1444-4c06-837a-3e437bbb77f2] Running
I0728 20:55:57.359269 228788 system_pods.go:126] duration metric: took 201.799524ms to wait for k8s-apps to be running ...
I0728 20:55:57.359278 228788 system_svc.go:44] waiting for kubelet service to be running ....
I0728 20:55:57.359322 228788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0728 20:55:57.371297 228788 system_svc.go:56] duration metric: took 12.014273ms WaitForService to wait for kubelet.
I0728 20:55:57.371319 228788 kubeadm.go:572] duration metric: took 3.386749704s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0728 20:55:57.371336 228788 node_conditions.go:102] verifying NodePressure condition ...
I0728 20:55:57.555946 228788 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
I0728 20:55:57.555973 228788 node_conditions.go:123] node cpu capacity is 8
I0728 20:55:57.555986 228788 node_conditions.go:105] duration metric: took 184.644433ms to run NodePressure ...
I0728 20:55:57.556000 228788 start.go:216] waiting for startup goroutines ...
I0728 20:55:57.600867 228788 start.go:506] kubectl: 1.24.3, cluster: 1.24.3 (minor skew: 0)
I0728 20:55:57.603232 228788 out.go:177] * Done! kubectl is now configured to use "pause-20220728205408-9843" cluster and "default" namespace by default
I0728 20:55:57.480068 234130 api_server.go:256] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0728 20:55:57.979265 234130 api_server.go:240] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
I0728 20:55:56.154437 246102 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
I0728 20:55:56.154703 246102 start.go:166] libmachine.API.Create for "docker-flags-20220728205555-9843" (driver="docker")
I0728 20:55:56.154737 246102 client.go:168] LocalClient.Create starting
I0728 20:55:56.154808 246102 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem
I0728 20:55:56.154846 246102 main.go:134] libmachine: Decoding PEM data...
I0728 20:55:56.154865 246102 main.go:134] libmachine: Parsing certificate...
I0728 20:55:56.154939 246102 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem
I0728 20:55:56.154964 246102 main.go:134] libmachine: Decoding PEM data...
I0728 20:55:56.154982 246102 main.go:134] libmachine: Parsing certificate...
I0728 20:55:56.155382 246102 cli_runner.go:164] Run: docker network inspect docker-flags-20220728205555-9843 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0728 20:55:56.189632 246102 cli_runner.go:211] docker network inspect docker-flags-20220728205555-9843 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0728 20:55:56.189695 246102 network_create.go:272] running [docker network inspect docker-flags-20220728205555-9843] to gather additional debugging logs...
I0728 20:55:56.189716 246102 cli_runner.go:164] Run: docker network inspect docker-flags-20220728205555-9843
W0728 20:55:56.224146 246102 cli_runner.go:211] docker network inspect docker-flags-20220728205555-9843 returned with exit code 1
I0728 20:55:56.224180 246102 network_create.go:275] error running [docker network inspect docker-flags-20220728205555-9843]: docker network inspect docker-flags-20220728205555-9843: exit status 1
stdout:
[]
stderr:
Error: No such network: docker-flags-20220728205555-9843
I0728 20:55:56.224195 246102 network_create.go:277] output of [docker network inspect docker-flags-20220728205555-9843]: -- stdout --
[]
-- /stdout --
** stderr **
Error: No such network: docker-flags-20220728205555-9843
** /stderr **
I0728 20:55:56.224247 246102 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0728 20:55:56.263061 246102 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-db1745fdf436 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:50:7f:81:ac}}
I0728 20:55:56.263937 246102 network.go:240] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-d352b530b751 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:70:b5:c6:04}}
I0728 20:55:56.264821 246102 network.go:240] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName:br-d1fd6d8f5de8 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:70:05:5f:19}}
I0728 20:55:56.265852 246102 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.76.0:0xc000118200] misses:0}
I0728 20:55:56.265901 246102 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0728 20:55:56.265923 246102 network_create.go:115] attempt to create docker network docker-flags-20220728205555-9843 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
I0728 20:55:56.265981 246102 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-20220728205555-9843 docker-flags-20220728205555-9843
I0728 20:55:56.338502 246102 network_create.go:99] docker network docker-flags-20220728205555-9843 192.168.76.0/24 created
I0728 20:55:56.338562 246102 kic.go:106] calculated static IP "192.168.76.2" for the "docker-flags-20220728205555-9843" container
I0728 20:55:56.338628 246102 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0728 20:55:56.378135 246102 cli_runner.go:164] Run: docker volume create docker-flags-20220728205555-9843 --label name.minikube.sigs.k8s.io=docker-flags-20220728205555-9843 --label created_by.minikube.sigs.k8s.io=true
I0728 20:55:56.417151 246102 oci.go:103] Successfully created a docker volume docker-flags-20220728205555-9843
I0728 20:55:56.417249 246102 cli_runner.go:164] Run: docker run --rm --name docker-flags-20220728205555-9843-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-20220728205555-9843 --entrypoint /usr/bin/test -v docker-flags-20220728205555-9843:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 -d /var/lib
I0728 20:55:57.048912 246102 oci.go:107] Successfully prepared a docker volume docker-flags-20220728205555-9843
I0728 20:55:57.048958 246102 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
I0728 20:55:57.048979 246102 kic.go:179] Starting extracting preloaded images to volume ...
I0728 20:55:57.049047 246102 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-14555-3276-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v docker-flags-20220728205555-9843:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 -I lz4 -xf /preloaded.tar -C /extractDir
*
* ==> Docker <==
* -- Logs begin at Thu 2022-07-28 20:54:19 UTC, end at Thu 2022-07-28 20:56:03 UTC. --
Jul 28 20:55:24 pause-20220728205408-9843 dockerd[4023]: time="2022-07-28T20:55:24.987280313Z" level=info msg="ignoring event" container=8d0ece39f0280b35c612035174890dedb1d41ec4d62e59f3e35894e0f08a3196 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jul 28 20:55:25 pause-20220728205408-9843 dockerd[4023]: time="2022-07-28T20:55:25.245735539Z" level=info msg="Removing stale sandbox 82c4bbb47bdc3bae4db17ca41592fde40f3cca1e775b57c4ed17a686dcf9589d (ff4527bd1426356b58b3574ac17d9a94ac057c3ade1f313a5eb17d9af025aa16)"
Jul 28 20:55:25 pause-20220728205408-9843 dockerd[4023]: time="2022-07-28T20:55:25.247887302Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint bdc1ef4d0ee5b9e825bec1b826d95e1d783940decf573efdb1ceb0da298fc65e 414cb073a1eabea1456982110ac2718a77057b59f37b3d5ad4c6f7723f578c71], retrying...."
Jul 28 20:55:25 pause-20220728205408-9843 dockerd[4023]: time="2022-07-28T20:55:25.363249845Z" level=info msg="Removing stale sandbox 922c7a4824c1fb0750ab9ef663519ad0b69c8b09504d61d741dcde20fb3661f0 (a9e0b3140400d291ad3440f978f7affb9892feaa707c01c5920d1e915b15ce63)"
Jul 28 20:55:25 pause-20220728205408-9843 dockerd[4023]: time="2022-07-28T20:55:25.365048372Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint bdc1ef4d0ee5b9e825bec1b826d95e1d783940decf573efdb1ceb0da298fc65e b1615e1e43bac09b4a833ebe3f0da34b70b8dbd9a67978d9ec84ed6ce5ae5626], retrying...."
Jul 28 20:55:25 pause-20220728205408-9843 dockerd[4023]: time="2022-07-28T20:55:25.473795736Z" level=info msg="Removing stale sandbox 9c81822bdf2d8243556e813d72daccedf551d43bfa656083923796d756faa545 (8825b8a486cbdd91bd5c0a58a15dd7ddbee55de4faf0ddae1e627a557ebe415d)"
Jul 28 20:55:25 pause-20220728205408-9843 dockerd[4023]: time="2022-07-28T20:55:25.475354252Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint bdc1ef4d0ee5b9e825bec1b826d95e1d783940decf573efdb1ceb0da298fc65e e60c43f93cfb1fdaf2d0b5eab452b6d23828a3eddec60694e842d44d72f72c4f], retrying...."
Jul 28 20:55:25 pause-20220728205408-9843 dockerd[4023]: time="2022-07-28T20:55:25.575962139Z" level=info msg="Removing stale sandbox ba953d6dec138b6c7bd2a4eddd60993cd3199d4ee81a00b53a62d563b7ed8bf5 (eb1a00c43d2d85cd2faa65ddfce7aa6a2651bab15974901f423501913e99aee4)"
Jul 28 20:55:25 pause-20220728205408-9843 dockerd[4023]: time="2022-07-28T20:55:25.577832723Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint bdc1ef4d0ee5b9e825bec1b826d95e1d783940decf573efdb1ceb0da298fc65e e6997d36a215d79831d1e9d8756900a73bc72746257d52ab5ea6b68e5e91e87d], retrying...."
Jul 28 20:55:25 pause-20220728205408-9843 dockerd[4023]: time="2022-07-28T20:55:25.608682585Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Jul 28 20:55:25 pause-20220728205408-9843 dockerd[4023]: time="2022-07-28T20:55:25.657651786Z" level=info msg="Loading containers: done."
Jul 28 20:55:25 pause-20220728205408-9843 dockerd[4023]: time="2022-07-28T20:55:25.671464864Z" level=info msg="Docker daemon" commit=a89b842 graphdriver(s)=overlay2 version=20.10.17
Jul 28 20:55:25 pause-20220728205408-9843 dockerd[4023]: time="2022-07-28T20:55:25.671582622Z" level=info msg="Daemon has completed initialization"
Jul 28 20:55:25 pause-20220728205408-9843 systemd[1]: Started Docker Application Container Engine.
Jul 28 20:55:25 pause-20220728205408-9843 dockerd[4023]: time="2022-07-28T20:55:25.693559800Z" level=info msg="API listen on [::]:2376"
Jul 28 20:55:25 pause-20220728205408-9843 dockerd[4023]: time="2022-07-28T20:55:25.698760616Z" level=info msg="API listen on /var/run/docker.sock"
Jul 28 20:55:25 pause-20220728205408-9843 dockerd[4023]: time="2022-07-28T20:55:25.923009480Z" level=error msg="Failed to compute size of container rootfs 0ebd4e5d69f1cb55d6e2fdd5ee244e6b30cb92e87ae999c0d5937940d11f0a39: mount does not exist"
Jul 28 20:55:30 pause-20220728205408-9843 dockerd[4023]: time="2022-07-28T20:55:30.976038533Z" level=info msg="ignoring event" container=d508b7234c30c0bb52c17971db4483cd3d0a2a47784c4e3e0b199b37fa757147 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jul 28 20:55:30 pause-20220728205408-9843 dockerd[4023]: time="2022-07-28T20:55:30.977080845Z" level=info msg="ignoring event" container=6332eed4991deea6a07be9ab2e12929fb070b22d064bbdc31ba9495dc7b25dea module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jul 28 20:55:30 pause-20220728205408-9843 dockerd[4023]: time="2022-07-28T20:55:30.979186893Z" level=info msg="ignoring event" container=2ecf26cec7af637681ac3c70fe43c41f179d1722bc77c7a2d2d489cfb915716f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jul 28 20:55:30 pause-20220728205408-9843 dockerd[4023]: time="2022-07-28T20:55:30.983819134Z" level=info msg="ignoring event" container=bfbc7b0761b06fc530b6a4879abab0f99c8c2e46ed493db2576c042bb649826c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jul 28 20:55:31 pause-20220728205408-9843 dockerd[4023]: time="2022-07-28T20:55:31.780700668Z" level=error msg="69e901f3a577186b2bd9aca5dbeff0ed7fd3b53d068564d1569703a08ed61d76 cleanup: failed to delete container from containerd: no such container"
Jul 28 20:55:31 pause-20220728205408-9843 dockerd[4023]: time="2022-07-28T20:55:31.781183574Z" level=error msg="950617f0fbf7b728fa7f89c1592f230f48b53bfd780830b663cdd9f98c8c7250 cleanup: failed to delete container from containerd: no such container"
Jul 28 20:55:31 pause-20220728205408-9843 dockerd[4023]: time="2022-07-28T20:55:31.784745423Z" level=error msg="935c125c4397284b77ce4b019ea03e774b5cd922e9950a2fd0caa9c0bcace5e4 cleanup: failed to delete container from containerd: no such container"
Jul 28 20:55:32 pause-20220728205408-9843 dockerd[4023]: time="2022-07-28T20:55:32.092495460Z" level=error msg="f8de79ccdd25ee97655a8d1988aef6feeb0fc58ab452d76b3317914482a392ac cleanup: failed to delete container from containerd: no such container"
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
54535d0f06bff 6e38f40d628db 8 seconds ago Running storage-provisioner 0 1c0b89f9f48bf
9a5ac22b3e259 a4ca41631cc7a 22 seconds ago Running coredns 2 43eeb5ca36c34
aac25227dad35 2ae1ba6417cbc 23 seconds ago Running kube-proxy 3 9ec27cff68a95
3e228f51ae388 3a5aa3a515f5d 29 seconds ago Running kube-scheduler 3 2bbcd5540726b
dcf77ae08999c aebe758cef4cd 29 seconds ago Running etcd 2 ed10281db78a9
4d6bd991be35a 586c112956dfc 29 seconds ago Running kube-controller-manager 3 d8583b123eb3c
1ade8458c928a d521dd763e2e3 29 seconds ago Running kube-apiserver 3 88268d22e7aeb
68f145b308b26 586c112956dfc 31 seconds ago Created kube-controller-manager 2 d8583b123eb3c
5046c6347079c d521dd763e2e3 31 seconds ago Created kube-apiserver 2 88268d22e7aeb
69e901f3a5771 2ae1ba6417cbc 36 seconds ago Created kube-proxy 2 2ecf26cec7af6
950617f0fbf7b aebe758cef4cd 37 seconds ago Created etcd 1 bfbc7b0761b06
f8de79ccdd25e a4ca41631cc7a 37 seconds ago Created coredns 1 d508b7234c30c
935c125c43972 3a5aa3a515f5d 37 seconds ago Created kube-scheduler 2 6332eed4991de
*
* ==> coredns [9a5ac22b3e25] <==
* .:53
[INFO] plugin/reload: Running configuration MD5 = 821b10ea3c4cce3a8581cf6a394d92f0
CoreDNS-1.8.6
linux/amd64, go1.17.1, 13a9191
*
* ==> coredns [f8de79ccdd25] <==
*
*
* ==> describe nodes <==
* Name: pause-20220728205408-9843
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=pause-20220728205408-9843
kubernetes.io/os=linux
minikube.k8s.io/commit=363f4186470802814a32480695fe2a353fd5f551
minikube.k8s.io/name=pause-20220728205408-9843
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2022_07_28T20_54_48_0700
minikube.k8s.io/version=v1.26.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Thu, 28 Jul 2022 20:54:46 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: pause-20220728205408-9843
AcquireTime: <unset>
RenewTime: Thu, 28 Jul 2022 20:55:59 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Thu, 28 Jul 2022 20:55:38 +0000 Thu, 28 Jul 2022 20:54:45 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 28 Jul 2022 20:55:38 +0000 Thu, 28 Jul 2022 20:54:45 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Thu, 28 Jul 2022 20:55:38 +0000 Thu, 28 Jul 2022 20:54:45 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Thu, 28 Jul 2022 20:55:38 +0000 Thu, 28 Jul 2022 20:54:59 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.67.2
Hostname: pause-20220728205408-9843
Capacity:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32873484Ki
pods: 110
Allocatable:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32873484Ki
pods: 110
System Info:
Machine ID: 855c6c72c86b4657b3d8c3c774fd7e1d
System UUID: c75052a2-fe0e-4168-898e-9d8afef41346
Boot ID: 8561e8cd-909f-419e-9bf9-95dd86404884
Kernel Version: 5.15.0-1013-gcp
OS Image: Ubuntu 20.04.4 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.17
Kubelet Version: v1.24.3
Kube-Proxy Version: v1.24.3
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (7 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
  kube-system                 coredns-6d4b75cb6d-8z2tg                              100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     62s
  kube-system                 etcd-pause-20220728205408-9843                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         75s
  kube-system                 kube-apiserver-pause-20220728205408-9843              250m (3%)     0 (0%)      0 (0%)           0 (0%)         75s
  kube-system                 kube-controller-manager-pause-20220728205408-9843     200m (2%)     0 (0%)      0 (0%)           0 (0%)         75s
  kube-system                 kube-proxy-bgdg9                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
  kube-system                 kube-scheduler-pause-20220728205408-9843              100m (1%)     0 (0%)      0 (0%)           0 (0%)         75s
  kube-system                 storage-provisioner                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
  cpu                750m (9%)   0 (0%)
  memory             170Mi (0%)  170Mi (0%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 61s kube-proxy
Normal Starting 22s kube-proxy
Normal NodeHasSufficientMemory 92s (x4 over 92s) kubelet Node pause-20220728205408-9843 status is now: NodeHasSufficientMemory
Normal NodeHasSufficientPID 92s (x4 over 92s) kubelet Node pause-20220728205408-9843 status is now: NodeHasSufficientPID
Normal NodeHasNoDiskPressure 92s (x4 over 92s) kubelet Node pause-20220728205408-9843 status is now: NodeHasNoDiskPressure
Normal Starting 76s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 75s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 75s kubelet Node pause-20220728205408-9843 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 75s kubelet Node pause-20220728205408-9843 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 75s kubelet Node pause-20220728205408-9843 status is now: NodeHasSufficientPID
Normal NodeNotReady 75s kubelet Node pause-20220728205408-9843 status is now: NodeNotReady
Normal NodeReady 65s kubelet Node pause-20220728205408-9843 status is now: NodeReady
Normal RegisteredNode 63s node-controller Node pause-20220728205408-9843 event: Registered Node pause-20220728205408-9843 in Controller
Normal Starting 31s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 31s (x8 over 31s) kubelet Node pause-20220728205408-9843 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 31s (x8 over 31s) kubelet Node pause-20220728205408-9843 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 31s (x7 over 31s) kubelet Node pause-20220728205408-9843 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 31s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 12s node-controller Node pause-20220728205408-9843 event: Registered Node pause-20220728205408-9843 in Controller
*
* ==> dmesg <==
* [ +0.007956] FS-Cache: N-cookie d=000000002c46670e{9p.inode} n=00000000965016fe
[ +0.008721] FS-Cache: N-key=[8] '8ba00f0200000000'
[ +0.008448] FS-Cache: Duplicate cookie detected
[ +0.005260] FS-Cache: O-cookie c=00000004 [p=00000003 fl=226 nc=0 na=1]
[ +0.008126] FS-Cache: O-cookie d=000000002c46670e{9p.inode} n=00000000b9a042c5
[ +0.008716] FS-Cache: O-key=[8] '8ba00f0200000000'
[ +0.006289] FS-Cache: N-cookie c=0000000d [p=00000003 fl=2 nc=0 na=1]
[ +0.007972] FS-Cache: N-cookie d=000000002c46670e{9p.inode} n=00000000c2425d5e
[ +0.008703] FS-Cache: N-key=[8] '8ba00f0200000000'
[ +3.219644] FS-Cache: Duplicate cookie detected
[ +0.004687] FS-Cache: O-cookie c=00000006 [p=00000003 fl=226 nc=0 na=1]
[ +0.006778] FS-Cache: O-cookie d=000000002c46670e{9p.inode} n=00000000a3ee3274
[ +0.007384] FS-Cache: O-key=[8] '8aa00f0200000000'
[ +0.004955] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
[ +0.007943] FS-Cache: N-cookie d=000000002c46670e{9p.inode} n=00000000aaf0937f
[ +0.008741] FS-Cache: N-key=[8] '8aa00f0200000000'
[ +0.506795] FS-Cache: Duplicate cookie detected
[ +0.004686] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
[ +0.006760] FS-Cache: O-cookie d=000000002c46670e{9p.inode} n=00000000ce7a7e79
[ +0.007354] FS-Cache: O-key=[8] '91a00f0200000000'
[ +0.004925] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
[ +0.007944] FS-Cache: N-cookie d=000000002c46670e{9p.inode} n=000000001cb78e37
[ +0.008718] FS-Cache: N-key=[8] '91a00f0200000000'
[Jul28 20:34] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
[Jul28 20:53] process 'docker/tmp/qemu-check079626803/check' started with executable stack
*
* ==> etcd [950617f0fbf7] <==
*
*
* ==> etcd [dcf77ae08999] <==
* {"level":"info","ts":"2022-07-28T20:55:39.846Z","caller":"traceutil/trace.go:171","msg":"trace[1277082378] range","detail":"{range_begin:/registry/clusterroles/system:aggregate-to-edit; range_end:; response_count:1; response_revision:394; }","duration":"116.123814ms","start":"2022-07-28T20:55:39.729Z","end":"2022-07-28T20:55:39.846Z","steps":["trace[1277082378] 'agreement among raft nodes before linearized reading' (duration: 116.079309ms)"],"step_count":1}
{"level":"info","ts":"2022-07-28T20:55:39.846Z","caller":"traceutil/trace.go:171","msg":"trace[1458254766] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:394; }","duration":"107.091801ms","start":"2022-07-28T20:55:39.738Z","end":"2022-07-28T20:55:39.846Z","steps":["trace[1458254766] 'agreement among raft nodes before linearized reading' (duration: 106.933879ms)"],"step_count":1}
{"level":"warn","ts":"2022-07-28T20:55:39.846Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"118.032298ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-bgdg9\" ","response":"range_response_count:1 size:4437"}
{"level":"info","ts":"2022-07-28T20:55:39.846Z","caller":"traceutil/trace.go:171","msg":"trace[914151284] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-bgdg9; range_end:; response_count:1; response_revision:394; }","duration":"118.211476ms","start":"2022-07-28T20:55:39.727Z","end":"2022-07-28T20:55:39.846Z","steps":["trace[914151284] 'agreement among raft nodes before linearized reading' (duration: 117.890885ms)"],"step_count":1}
{"level":"warn","ts":"2022-07-28T20:55:40.228Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"343.523664ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/edit\" ","response":"range_response_count:1 size:3379"}
{"level":"info","ts":"2022-07-28T20:55:40.228Z","caller":"traceutil/trace.go:171","msg":"trace[1208480452] range","detail":"{range_begin:/registry/clusterroles/edit; range_end:; response_count:1; response_revision:399; }","duration":"343.641439ms","start":"2022-07-28T20:55:39.885Z","end":"2022-07-28T20:55:40.228Z","steps":["trace[1208480452] 'agreement among raft nodes before linearized reading' (duration: 88.612437ms)","trace[1208480452] 'range keys from in-memory index tree' (duration: 254.857308ms)"],"step_count":2}
{"level":"warn","ts":"2022-07-28T20:55:40.228Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-07-28T20:55:39.885Z","time spent":"343.69039ms","remote":"127.0.0.1:36968","response type":"/etcdserverpb.KV/Range","request count":0,"request size":29,"response count":1,"response size":3403,"request content":"key:\"/registry/clusterroles/edit\" "}
{"level":"warn","ts":"2022-07-28T20:55:40.229Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"254.954912ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2289942175248618431 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/pause-20220728205408-9843.170619aab69936e1\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/pause-20220728205408-9843.170619aab69936e1\" value_size:568 lease:2289942175248618400 >> failure:<>>","response":"size:16"}
{"level":"info","ts":"2022-07-28T20:55:40.229Z","caller":"traceutil/trace.go:171","msg":"trace[1197468206] transaction","detail":"{read_only:false; response_revision:400; number_of_response:1; }","duration":"341.914118ms","start":"2022-07-28T20:55:39.887Z","end":"2022-07-28T20:55:40.229Z","steps":["trace[1197468206] 'process raft request' (duration: 86.499197ms)","trace[1197468206] 'compare' (duration: 254.731034ms)"],"step_count":2}
{"level":"info","ts":"2022-07-28T20:55:40.229Z","caller":"traceutil/trace.go:171","msg":"trace[1936646607] transaction","detail":"{read_only:false; response_revision:401; number_of_response:1; }","duration":"199.446321ms","start":"2022-07-28T20:55:40.029Z","end":"2022-07-28T20:55:40.229Z","steps":["trace[1936646607] 'process raft request' (duration: 199.376927ms)"],"step_count":1}
{"level":"warn","ts":"2022-07-28T20:55:40.229Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-07-28T20:55:39.887Z","time spent":"341.98956ms","remote":"127.0.0.1:36884","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":653,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/default/pause-20220728205408-9843.170619aab69936e1\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/pause-20220728205408-9843.170619aab69936e1\" value_size:568 lease:2289942175248618400 >> failure:<>"}
{"level":"info","ts":"2022-07-28T20:55:40.229Z","caller":"traceutil/trace.go:171","msg":"trace[656430561] linearizableReadLoop","detail":"{readStateIndex:420; appliedIndex:419; }","duration":"255.55912ms","start":"2022-07-28T20:55:39.973Z","end":"2022-07-28T20:55:40.229Z","steps":["trace[656430561] 'read index received' (duration: 30.653813ms)","trace[656430561] 'applied index is now lower than readState.Index' (duration: 224.903883ms)"],"step_count":2}
{"level":"warn","ts":"2022-07-28T20:55:40.229Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"289.890026ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2022-07-28T20:55:40.229Z","caller":"traceutil/trace.go:171","msg":"trace[126516261] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:401; }","duration":"289.923404ms","start":"2022-07-28T20:55:39.939Z","end":"2022-07-28T20:55:40.229Z","steps":["trace[126516261] 'agreement among raft nodes before linearized reading' (duration: 289.872865ms)"],"step_count":1}
{"level":"warn","ts":"2022-07-28T20:55:40.234Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"167.548767ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2022-07-28T20:55:40.234Z","caller":"traceutil/trace.go:171","msg":"trace[1321501223] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:401; }","duration":"167.615609ms","start":"2022-07-28T20:55:40.066Z","end":"2022-07-28T20:55:40.234Z","steps":["trace[1321501223] 'agreement among raft nodes before linearized reading' (duration: 167.512863ms)"],"step_count":1}
{"level":"info","ts":"2022-07-28T20:55:40.533Z","caller":"traceutil/trace.go:171","msg":"trace[1260603021] linearizableReadLoop","detail":"{readStateIndex:424; appliedIndex:424; }","duration":"102.585942ms","start":"2022-07-28T20:55:40.431Z","end":"2022-07-28T20:55:40.533Z","steps":["trace[1260603021] 'read index received' (duration: 102.564734ms)","trace[1260603021] 'applied index is now lower than readState.Index' (duration: 19.453µs)"],"step_count":2}
{"level":"warn","ts":"2022-07-28T20:55:40.613Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"275.136976ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"warn","ts":"2022-07-28T20:55:40.613Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"274.178241ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/default/pause-20220728205408-9843.170619aaafec6f93\" ","response":"range_response_count:1 size:689"}
{"level":"info","ts":"2022-07-28T20:55:40.613Z","caller":"traceutil/trace.go:171","msg":"trace[556432168] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:404; }","duration":"275.266215ms","start":"2022-07-28T20:55:40.338Z","end":"2022-07-28T20:55:40.613Z","steps":["trace[556432168] 'agreement among raft nodes before linearized reading' (duration: 195.563918ms)","trace[556432168] 'range keys from in-memory index tree' (duration: 79.552652ms)"],"step_count":2}
{"level":"info","ts":"2022-07-28T20:55:40.613Z","caller":"traceutil/trace.go:171","msg":"trace[732415796] range","detail":"{range_begin:/registry/events/default/pause-20220728205408-9843.170619aaafec6f93; range_end:; response_count:1; response_revision:404; }","duration":"274.226139ms","start":"2022-07-28T20:55:40.339Z","end":"2022-07-28T20:55:40.613Z","steps":["trace[732415796] 'agreement among raft nodes before linearized reading' (duration: 194.427306ms)","trace[732415796] 'range keys from in-memory index tree' (duration: 79.720159ms)"],"step_count":2}
{"level":"warn","ts":"2022-07-28T20:55:40.613Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"179.02147ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:aggregate-to-view\" ","response":"range_response_count:1 size:1932"}
{"level":"info","ts":"2022-07-28T20:55:40.613Z","caller":"traceutil/trace.go:171","msg":"trace[1033466391] range","detail":"{range_begin:/registry/clusterroles/system:aggregate-to-view; range_end:; response_count:1; response_revision:404; }","duration":"179.303747ms","start":"2022-07-28T20:55:40.434Z","end":"2022-07-28T20:55:40.613Z","steps":["trace[1033466391] 'agreement among raft nodes before linearized reading' (duration: 99.441492ms)","trace[1033466391] 'range keys from in-memory index tree' (duration: 79.531252ms)"],"step_count":2}
{"level":"warn","ts":"2022-07-28T20:56:01.097Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"128.859587ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/default/kubernetes\" ","response":"range_response_count:1 size:422"}
{"level":"info","ts":"2022-07-28T20:56:01.097Z","caller":"traceutil/trace.go:171","msg":"trace[1971741677] range","detail":"{range_begin:/registry/services/endpoints/default/kubernetes; range_end:; response_count:1; response_revision:486; }","duration":"128.95508ms","start":"2022-07-28T20:56:00.968Z","end":"2022-07-28T20:56:01.097Z","steps":["trace[1971741677] 'agreement among raft nodes before linearized reading' (duration: 28.875339ms)","trace[1971741677] 'range keys from in-memory index tree' (duration: 99.940733ms)"],"step_count":2}
*
* ==> kernel <==
* 20:56:04 up 38 min, 0 users, load average: 8.00, 5.45, 2.82
Linux pause-20220728205408-9843 5.15.0-1013-gcp #18~20.04.1-Ubuntu SMP Sun Jul 3 08:20:07 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.4 LTS"
*
* ==> kube-apiserver [1ade8458c928] <==
* I0728 20:55:38.586687 1 naming_controller.go:291] Starting NamingConditionController
I0728 20:55:38.586725 1 establishing_controller.go:76] Starting EstablishingController
I0728 20:55:38.586755 1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I0728 20:55:38.586771 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0728 20:55:38.586808 1 crd_finalizer.go:266] Starting CRDFinalizer
I0728 20:55:38.594251 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I0728 20:55:38.656832 1 shared_informer.go:262] Caches are synced for node_authorizer
I0728 20:55:38.672358 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0728 20:55:38.672407 1 shared_informer.go:262] Caches are synced for crd-autoregister
I0728 20:55:38.672442 1 cache.go:39] Caches are synced for autoregister controller
I0728 20:55:38.675076 1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
I0728 20:55:38.677315 1 apf_controller.go:322] Running API Priority and Fairness config worker
I0728 20:55:38.677334 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
E0728 20:55:38.827208 1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
I0728 20:55:38.829060 1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
I0728 20:55:39.301220 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0728 20:55:39.725106 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0728 20:55:41.269651 1 controller.go:611] quota admission added evaluator for: serviceaccounts
I0728 20:55:41.279281 1 controller.go:611] quota admission added evaluator for: deployments.apps
I0728 20:55:41.323131 1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
I0728 20:55:41.350600 1 controller.go:611] quota admission added evaluator for: daemonsets.apps
I0728 20:55:41.368045 1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0728 20:55:41.374622 1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0728 20:55:52.466665 1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0728 20:55:52.578290 1 controller.go:611] quota admission added evaluator for: endpoints
*
* ==> kube-apiserver [5046c6347079] <==
*
*
* ==> kube-controller-manager [4d6bd991be35] <==
* I0728 20:55:52.401551 1 shared_informer.go:262] Caches are synced for HPA
I0728 20:55:52.402670 1 shared_informer.go:262] Caches are synced for PVC protection
I0728 20:55:52.405849 1 shared_informer.go:262] Caches are synced for taint
I0728 20:55:52.405930 1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone:
W0728 20:55:52.405991 1 node_lifecycle_controller.go:1014] Missing timestamp for Node pause-20220728205408-9843. Assuming now as a timestamp.
I0728 20:55:52.405989 1 taint_manager.go:187] "Starting NoExecuteTaintManager"
I0728 20:55:52.406102 1 event.go:294] "Event occurred" object="pause-20220728205408-9843" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-20220728205408-9843 event: Registered Node pause-20220728205408-9843 in Controller"
I0728 20:55:52.406122 1 node_lifecycle_controller.go:1215] Controller detected that zone is now in state Normal.
I0728 20:55:52.408197 1 shared_informer.go:262] Caches are synced for ephemeral
I0728 20:55:52.409107 1 shared_informer.go:262] Caches are synced for deployment
I0728 20:55:52.421694 1 shared_informer.go:262] Caches are synced for TTL
I0728 20:55:52.421760 1 shared_informer.go:262] Caches are synced for namespace
I0728 20:55:52.421838 1 shared_informer.go:262] Caches are synced for PV protection
I0728 20:55:52.458733 1 shared_informer.go:262] Caches are synced for endpoint_slice
I0728 20:55:52.497493 1 shared_informer.go:262] Caches are synced for TTL after finished
I0728 20:55:52.532942 1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
I0728 20:55:52.536189 1 shared_informer.go:262] Caches are synced for resource quota
I0728 20:55:52.541597 1 shared_informer.go:262] Caches are synced for job
I0728 20:55:52.543779 1 shared_informer.go:262] Caches are synced for cronjob
I0728 20:55:52.569436 1 shared_informer.go:262] Caches are synced for endpoint
I0728 20:55:52.614150 1 shared_informer.go:262] Caches are synced for resource quota
I0728 20:55:52.638099 1 shared_informer.go:262] Caches are synced for attach detach
I0728 20:55:53.050038 1 shared_informer.go:262] Caches are synced for garbage collector
I0728 20:55:53.064190 1 shared_informer.go:262] Caches are synced for garbage collector
I0728 20:55:53.064212 1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
*
* ==> kube-controller-manager [68f145b308b2] <==
*
*
* ==> kube-proxy [69e901f3a577] <==
*
*
* ==> kube-proxy [aac25227dad3] <==
* I0728 20:55:41.241812 1 node.go:163] Successfully retrieved node IP: 192.168.67.2
I0728 20:55:41.241885 1 server_others.go:138] "Detected node IP" address="192.168.67.2"
I0728 20:55:41.241917 1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I0728 20:55:41.281911 1 server_others.go:206] "Using iptables Proxier"
I0728 20:55:41.281955 1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
I0728 20:55:41.281964 1 server_others.go:214] "Creating dualStackProxier for iptables"
I0728 20:55:41.281975 1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
I0728 20:55:41.282004 1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I0728 20:55:41.282176 1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I0728 20:55:41.282440 1 server.go:661] "Version info" version="v1.24.3"
I0728 20:55:41.282463 1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0728 20:55:41.313324 1 config.go:226] "Starting endpoint slice config controller"
I0728 20:55:41.313411 1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
I0728 20:55:41.313473 1 config.go:317] "Starting service config controller"
I0728 20:55:41.313497 1 shared_informer.go:255] Waiting for caches to sync for service config
I0728 20:55:41.313569 1 config.go:444] "Starting node config controller"
I0728 20:55:41.313601 1 shared_informer.go:255] Waiting for caches to sync for node config
I0728 20:55:41.414434 1 shared_informer.go:262] Caches are synced for node config
I0728 20:55:41.414459 1 shared_informer.go:262] Caches are synced for service config
I0728 20:55:41.414488 1 shared_informer.go:262] Caches are synced for endpoint slice config
*
* ==> kube-scheduler [3e228f51ae38] <==
* I0728 20:55:35.512759 1 serving.go:348] Generated self-signed cert in-memory
W0728 20:55:38.620149 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0728 20:55:38.620384 1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0728 20:55:38.620483 1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
W0728 20:55:38.620569 1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0728 20:55:38.630051 1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.3"
I0728 20:55:38.630079 1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0728 20:55:38.631236 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0728 20:55:38.631279 1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0728 20:55:38.631358 1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
I0728 20:55:38.631844 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0728 20:55:38.732391 1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kube-scheduler [935c125c4397] <==
*
*
* ==> kubelet <==
* -- Logs begin at Thu 2022-07-28 20:54:19 UTC, end at Thu 2022-07-28 20:56:04 UTC. --
Jul 28 20:55:38 pause-20220728205408-9843 kubelet[5344]: E0728 20:55:38.456451 5344 kubelet.go:2424] "Error getting node" err="node \"pause-20220728205408-9843\" not found"
Jul 28 20:55:38 pause-20220728205408-9843 kubelet[5344]: E0728 20:55:38.557302 5344 kubelet.go:2424] "Error getting node" err="node \"pause-20220728205408-9843\" not found"
Jul 28 20:55:38 pause-20220728205408-9843 kubelet[5344]: I0728 20:55:38.658327 5344 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
Jul 28 20:55:38 pause-20220728205408-9843 kubelet[5344]: I0728 20:55:38.659018 5344 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
Jul 28 20:55:38 pause-20220728205408-9843 kubelet[5344]: I0728 20:55:38.829563 5344 kubelet_node_status.go:108] "Node was previously registered" node="pause-20220728205408-9843"
Jul 28 20:55:38 pause-20220728205408-9843 kubelet[5344]: I0728 20:55:38.829681 5344 kubelet_node_status.go:73] "Successfully registered node" node="pause-20220728205408-9843"
Jul 28 20:55:39 pause-20220728205408-9843 kubelet[5344]: I0728 20:55:39.445065 5344 apiserver.go:52] "Watching apiserver"
Jul 28 20:55:39 pause-20220728205408-9843 kubelet[5344]: I0728 20:55:39.449222 5344 topology_manager.go:200] "Topology Admit Handler"
Jul 28 20:55:39 pause-20220728205408-9843 kubelet[5344]: I0728 20:55:39.449351 5344 topology_manager.go:200] "Topology Admit Handler"
Jul 28 20:55:39 pause-20220728205408-9843 kubelet[5344]: I0728 20:55:39.449425 5344 topology_manager.go:200] "Topology Admit Handler"
Jul 28 20:55:39 pause-20220728205408-9843 kubelet[5344]: I0728 20:55:39.563288 5344 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4011009e-4b1d-4e94-9355-f2de01699705-kube-proxy\") pod \"kube-proxy-bgdg9\" (UID: \"4011009e-4b1d-4e94-9355-f2de01699705\") " pod="kube-system/kube-proxy-bgdg9"
Jul 28 20:55:39 pause-20220728205408-9843 kubelet[5344]: I0728 20:55:39.563352 5344 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mpgs\" (UniqueName: \"kubernetes.io/projected/4011009e-4b1d-4e94-9355-f2de01699705-kube-api-access-7mpgs\") pod \"kube-proxy-bgdg9\" (UID: \"4011009e-4b1d-4e94-9355-f2de01699705\") " pod="kube-system/kube-proxy-bgdg9"
Jul 28 20:55:39 pause-20220728205408-9843 kubelet[5344]: I0728 20:55:39.563388 5344 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d00c451-82b6-48d5-beaa-bc5fa6f1a242-config-volume\") pod \"coredns-6d4b75cb6d-8z2tg\" (UID: \"7d00c451-82b6-48d5-beaa-bc5fa6f1a242\") " pod="kube-system/coredns-6d4b75cb6d-8z2tg"
Jul 28 20:55:39 pause-20220728205408-9843 kubelet[5344]: I0728 20:55:39.563513 5344 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xb5fr\" (UniqueName: \"kubernetes.io/projected/7d00c451-82b6-48d5-beaa-bc5fa6f1a242-kube-api-access-xb5fr\") pod \"coredns-6d4b75cb6d-8z2tg\" (UID: \"7d00c451-82b6-48d5-beaa-bc5fa6f1a242\") " pod="kube-system/coredns-6d4b75cb6d-8z2tg"
Jul 28 20:55:39 pause-20220728205408-9843 kubelet[5344]: I0728 20:55:39.563662 5344 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4011009e-4b1d-4e94-9355-f2de01699705-xtables-lock\") pod \"kube-proxy-bgdg9\" (UID: \"4011009e-4b1d-4e94-9355-f2de01699705\") " pod="kube-system/kube-proxy-bgdg9"
Jul 28 20:55:39 pause-20220728205408-9843 kubelet[5344]: I0728 20:55:39.563710 5344 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4011009e-4b1d-4e94-9355-f2de01699705-lib-modules\") pod \"kube-proxy-bgdg9\" (UID: \"4011009e-4b1d-4e94-9355-f2de01699705\") " pod="kube-system/kube-proxy-bgdg9"
Jul 28 20:55:39 pause-20220728205408-9843 kubelet[5344]: I0728 20:55:39.563731 5344 reconciler.go:157] "Reconciler: start to sync state"
Jul 28 20:55:39 pause-20220728205408-9843 kubelet[5344]: I0728 20:55:39.788639 5344 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=bb68aeee-a86e-4279-89bc-6ed7151bdbf1 path="/var/lib/kubelet/pods/bb68aeee-a86e-4279-89bc-6ed7151bdbf1/volumes"
Jul 28 20:55:41 pause-20220728205408-9843 kubelet[5344]: I0728 20:55:41.244504 5344 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="43eeb5ca36c34348d82248a73851d418ba9e36f96995d7fc6f0687a9c76d86d2"
Jul 28 20:55:43 pause-20220728205408-9843 kubelet[5344]: I0728 20:55:43.274287 5344 prober_manager.go:274] "Failed to trigger a manual run" probe="Readiness"
Jul 28 20:55:46 pause-20220728205408-9843 kubelet[5344]: I0728 20:55:46.627600 5344 prober_manager.go:274] "Failed to trigger a manual run" probe="Readiness"
Jul 28 20:55:54 pause-20220728205408-9843 kubelet[5344]: I0728 20:55:54.844880 5344 topology_manager.go:200] "Topology Admit Handler"
Jul 28 20:55:54 pause-20220728205408-9843 kubelet[5344]: I0728 20:55:54.921218 5344 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/bbaefc83-1444-4c06-837a-3e437bbb77f2-tmp\") pod \"storage-provisioner\" (UID: \"bbaefc83-1444-4c06-837a-3e437bbb77f2\") " pod="kube-system/storage-provisioner"
Jul 28 20:55:54 pause-20220728205408-9843 kubelet[5344]: I0728 20:55:54.921288 5344 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5w8kw\" (UniqueName: \"kubernetes.io/projected/bbaefc83-1444-4c06-837a-3e437bbb77f2-kube-api-access-5w8kw\") pod \"storage-provisioner\" (UID: \"bbaefc83-1444-4c06-837a-3e437bbb77f2\") " pod="kube-system/storage-provisioner"
Jul 28 20:55:55 pause-20220728205408-9843 kubelet[5344]: I0728 20:55:55.384155 5344 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="1c0b89f9f48bfa40fdc96cc0e2834f93eee33ab5cb93d5e930f68eb8b9905c6c"
*
* ==> storage-provisioner [54535d0f06bf] <==
* I0728 20:55:55.586652 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0728 20:55:55.596637 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0728 20:55:55.596678 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0728 20:55:55.619218 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0728 20:55:55.619423 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_pause-20220728205408-9843_efb9a403-38d1-478b-9a10-30c50a8884ea!
I0728 20:55:55.619749 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a49a8a0b-b4bb-49f2-8733-af4df39f4401", APIVersion:"v1", ResourceVersion:"480", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-20220728205408-9843_efb9a403-38d1-478b-9a10-30c50a8884ea became leader
I0728 20:55:55.720359 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_pause-20220728205408-9843_efb9a403-38d1-478b-9a10-30c50a8884ea!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-20220728205408-9843 -n pause-20220728205408-9843
helpers_test.go:261: (dbg) Run: kubectl --context pause-20220728205408-9843 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods:
helpers_test.go:272: ======> post-mortem[TestPause/serial/SecondStartNoReconfiguration]: describe non-running pods <======
helpers_test.go:275: (dbg) Run: kubectl --context pause-20220728205408-9843 describe pod
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context pause-20220728205408-9843 describe pod : exit status 1 (51.017931ms)
** stderr **
error: resource name may not be empty
** /stderr **
helpers_test.go:277: kubectl --context pause-20220728205408-9843 describe pod : exit status 1
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (58.94s)