=== RUN TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run: out/minikube-linux-amd64 start -p pause-574316 --alsologtostderr -v=1 --driver=docker --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-574316 --alsologtostderr -v=1 --driver=docker --container-runtime=docker: (1m9.81177555s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got:
-- stdout --
* [pause-574316] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=16143
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/16143-62012/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/16143-62012/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on existing profile
* Starting control plane node pause-574316 in cluster pause-574316
* Pulling base image ...
* Updating the running docker "pause-574316" container ...
* Preparing Kubernetes v1.26.3 on Docker 23.0.1 ...
* Configuring bridge CNI (Container Networking Interface) ...
* Enabled addons:
* Verifying Kubernetes components...
* Done! kubectl is now configured to use "pause-574316" cluster and "default" namespace by default
-- /stdout --
** stderr **
I0323 23:25:38.794014 401618 out.go:296] Setting OutFile to fd 1 ...
I0323 23:25:38.794225 401618 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0323 23:25:38.794240 401618 out.go:309] Setting ErrFile to fd 2...
I0323 23:25:38.794262 401618 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0323 23:25:38.794456 401618 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16143-62012/.minikube/bin
I0323 23:25:38.795236 401618 out.go:303] Setting JSON to false
I0323 23:25:38.797716 401618 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":7685,"bootTime":1679606254,"procs":1031,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1030-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0323 23:25:38.797813 401618 start.go:135] virtualization: kvm guest
I0323 23:25:38.801127 401618 out.go:177] * [pause-574316] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
I0323 23:25:38.803311 401618 out.go:177] - MINIKUBE_LOCATION=16143
I0323 23:25:38.803314 401618 notify.go:220] Checking for updates...
I0323 23:25:38.805210 401618 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0323 23:25:38.807168 401618 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/16143-62012/kubeconfig
I0323 23:25:38.809028 401618 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/16143-62012/.minikube
I0323 23:25:38.810723 401618 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0323 23:25:38.812214 401618 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0323 23:25:38.814210 401618 config.go:182] Loaded profile config "pause-574316": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.3
I0323 23:25:38.814624 401618 driver.go:365] Setting default libvirt URI to qemu:///system
I0323 23:25:38.898698 401618 docker.go:121] docker version: linux-23.0.1:Docker Engine - Community
I0323 23:25:38.898809 401618 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0323 23:25:39.028887 401618 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:47 SystemTime:2023-03-23 23:25:39.019585714 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1030-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:23.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1e1ea6e986c6c86565bc33d52e34b81b3e2bc71f Expected:1e1ea6e986c6c86565bc33d52e34b81b3e2bc71f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.16.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0323 23:25:39.029046 401618 docker.go:294] overlay module found
I0323 23:25:39.031181 401618 out.go:177] * Using the docker driver based on existing profile
I0323 23:25:39.032527 401618 start.go:295] selected driver: docker
I0323 23:25:39.032544 401618 start.go:856] validating driver "docker" against &{Name:pause-574316 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.3 ClusterName:pause-574316 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0323 23:25:39.032682 401618 start.go:867] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0323 23:25:39.032779 401618 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0323 23:25:39.165143 401618 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2023-03-23 23:25:39.15591748 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1030-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:23.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1e1ea6e986c6c86565bc33d52e34b81b3e2bc71f Expected:1e1ea6e986c6c86565bc33d52e34b81b3e2bc71f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.16.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0323 23:25:39.165990 401618 cni.go:84] Creating CNI manager for ""
I0323 23:25:39.166023 401618 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0323 23:25:39.166041 401618 start_flags.go:319] config:
{Name:pause-574316 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.3 ClusterName:pause-574316 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0323 23:25:39.169386 401618 out.go:177] * Starting control plane node pause-574316 in cluster pause-574316
I0323 23:25:39.171375 401618 cache.go:120] Beginning downloading kic base image for docker with docker
I0323 23:25:39.173869 401618 out.go:177] * Pulling base image ...
I0323 23:25:39.175359 401618 preload.go:132] Checking if preload exists for k8s version v1.26.3 and runtime docker
I0323 23:25:39.175382 401618 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 in local docker daemon
I0323 23:25:39.175404 401618 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16143-62012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.3-docker-overlay2-amd64.tar.lz4
I0323 23:25:39.175418 401618 cache.go:57] Caching tarball of preloaded images
I0323 23:25:39.175518 401618 preload.go:174] Found /home/jenkins/minikube-integration/16143-62012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0323 23:25:39.175533 401618 cache.go:60] Finished verifying existence of preloaded tar for v1.26.3 on docker
I0323 23:25:39.175661 401618 profile.go:148] Saving config to /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/pause-574316/config.json ...
I0323 23:25:39.257225 401618 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 in local docker daemon, skipping pull
I0323 23:25:39.257257 401618 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 exists in daemon, skipping load
I0323 23:25:39.257281 401618 cache.go:193] Successfully downloaded all kic artifacts
I0323 23:25:39.257319 401618 start.go:364] acquiring machines lock for pause-574316: {Name:mk398c58b4397d996ea922b4a13a9404b26b4f2e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0323 23:25:39.257463 401618 start.go:368] acquired machines lock for "pause-574316" in 91.492µs
I0323 23:25:39.257489 401618 start.go:96] Skipping create...Using existing machine configuration
I0323 23:25:39.257500 401618 fix.go:55] fixHost starting:
I0323 23:25:39.257789 401618 cli_runner.go:164] Run: docker container inspect pause-574316 --format={{.State.Status}}
I0323 23:25:39.351866 401618 fix.go:103] recreateIfNeeded on pause-574316: state=Running err=<nil>
W0323 23:25:39.351895 401618 fix.go:129] unexpected machine state, will restart: <nil>
I0323 23:25:39.354221 401618 out.go:177] * Updating the running docker "pause-574316" container ...
I0323 23:25:39.355859 401618 machine.go:88] provisioning docker machine ...
I0323 23:25:39.355899 401618 ubuntu.go:169] provisioning hostname "pause-574316"
I0323 23:25:39.355948 401618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-574316
I0323 23:25:39.433008 401618 main.go:141] libmachine: Using SSH client type: native
I0323 23:25:39.433738 401618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e3e0] 0x811480 <nil> [] 0s} 127.0.0.1 32989 <nil> <nil>}
I0323 23:25:39.433769 401618 main.go:141] libmachine: About to run SSH command:
sudo hostname pause-574316 && echo "pause-574316" | sudo tee /etc/hostname
I0323 23:25:39.583955 401618 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-574316
I0323 23:25:39.584040 401618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-574316
I0323 23:25:39.667029 401618 main.go:141] libmachine: Using SSH client type: native
I0323 23:25:39.667707 401618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e3e0] 0x811480 <nil> [] 0s} 127.0.0.1 32989 <nil> <nil>}
I0323 23:25:39.667745 401618 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\spause-574316' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-574316/g' /etc/hosts;
else
echo '127.0.1.1 pause-574316' | sudo tee -a /etc/hosts;
fi
fi
I0323 23:25:39.809717 401618 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0323 23:25:39.809746 401618 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16143-62012/.minikube CaCertPath:/home/jenkins/minikube-integration/16143-62012/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16143-62012/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16143-62012/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16143-62012/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16143-62012/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16143-62012/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16143-62012/.minikube}
I0323 23:25:39.809766 401618 ubuntu.go:177] setting up certificates
I0323 23:25:39.809775 401618 provision.go:83] configureAuth start
I0323 23:25:39.809825 401618 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-574316
I0323 23:25:39.913040 401618 provision.go:138] copyHostCerts
I0323 23:25:39.913126 401618 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-62012/.minikube/cert.pem, removing ...
I0323 23:25:39.913138 401618 exec_runner.go:207] rm: /home/jenkins/minikube-integration/16143-62012/.minikube/cert.pem
I0323 23:25:39.913218 401618 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-62012/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16143-62012/.minikube/cert.pem (1123 bytes)
I0323 23:25:39.913364 401618 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-62012/.minikube/key.pem, removing ...
I0323 23:25:39.913373 401618 exec_runner.go:207] rm: /home/jenkins/minikube-integration/16143-62012/.minikube/key.pem
I0323 23:25:39.913465 401618 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-62012/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16143-62012/.minikube/key.pem (1675 bytes)
I0323 23:25:39.913573 401618 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-62012/.minikube/ca.pem, removing ...
I0323 23:25:39.913594 401618 exec_runner.go:207] rm: /home/jenkins/minikube-integration/16143-62012/.minikube/ca.pem
I0323 23:25:39.913636 401618 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-62012/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16143-62012/.minikube/ca.pem (1078 bytes)
I0323 23:25:39.913752 401618 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16143-62012/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16143-62012/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16143-62012/.minikube/certs/ca-key.pem org=jenkins.pause-574316 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube pause-574316]
I0323 23:25:39.987714 401618 provision.go:172] copyRemoteCerts
I0323 23:25:39.987781 401618 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0323 23:25:39.987815 401618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-574316
I0323 23:25:40.075925 401618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32989 SSHKeyPath:/home/jenkins/minikube-integration/16143-62012/.minikube/machines/pause-574316/id_rsa Username:docker}
I0323 23:25:40.186412 401618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-62012/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0323 23:25:40.208578 401618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-62012/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
I0323 23:25:40.227384 401618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-62012/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0323 23:25:40.245282 401618 provision.go:86] duration metric: configureAuth took 435.487257ms
I0323 23:25:40.245311 401618 ubuntu.go:193] setting minikube options for container-runtime
I0323 23:25:40.245622 401618 config.go:182] Loaded profile config "pause-574316": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.3
I0323 23:25:40.245673 401618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-574316
I0323 23:25:40.321283 401618 main.go:141] libmachine: Using SSH client type: native
I0323 23:25:40.321744 401618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e3e0] 0x811480 <nil> [] 0s} 127.0.0.1 32989 <nil> <nil>}
I0323 23:25:40.321760 401618 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0323 23:25:40.437792 401618 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
I0323 23:25:40.437825 401618 ubuntu.go:71] root file system type: overlay
I0323 23:25:40.437981 401618 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0323 23:25:40.438064 401618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-574316
I0323 23:25:40.517587 401618 main.go:141] libmachine: Using SSH client type: native
I0323 23:25:40.518003 401618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e3e0] 0x811480 <nil> [] 0s} 127.0.0.1 32989 <nil> <nil>}
I0323 23:25:40.518064 401618 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0323 23:25:40.642671 401618 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0323 23:25:40.642784 401618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-574316
I0323 23:25:40.716391 401618 main.go:141] libmachine: Using SSH client type: native
I0323 23:25:40.716801 401618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e3e0] 0x811480 <nil> [] 0s} 127.0.0.1 32989 <nil> <nil>}
I0323 23:25:40.716821 401618 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0323 23:25:40.837842 401618 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0323 23:25:40.837871 401618 machine.go:91] provisioned docker machine in 1.481993353s
I0323 23:25:40.837885 401618 start.go:300] post-start starting for "pause-574316" (driver="docker")
I0323 23:25:40.837894 401618 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0323 23:25:40.837987 401618 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0323 23:25:40.838048 401618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-574316
I0323 23:25:40.912252 401618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32989 SSHKeyPath:/home/jenkins/minikube-integration/16143-62012/.minikube/machines/pause-574316/id_rsa Username:docker}
I0323 23:25:40.997437 401618 ssh_runner.go:195] Run: cat /etc/os-release
I0323 23:25:41.000457 401618 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0323 23:25:41.000490 401618 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0323 23:25:41.000504 401618 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0323 23:25:41.000512 401618 info.go:137] Remote host: Ubuntu 20.04.5 LTS
I0323 23:25:41.000522 401618 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-62012/.minikube/addons for local assets ...
I0323 23:25:41.000592 401618 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-62012/.minikube/files for local assets ...
I0323 23:25:41.000702 401618 filesync.go:149] local asset: /home/jenkins/minikube-integration/16143-62012/.minikube/files/etc/ssl/certs/687022.pem -> 687022.pem in /etc/ssl/certs
I0323 23:25:41.000829 401618 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0323 23:25:41.008074 401618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-62012/.minikube/files/etc/ssl/certs/687022.pem --> /etc/ssl/certs/687022.pem (1708 bytes)
I0323 23:25:41.026484 401618 start.go:303] post-start completed in 188.579327ms
I0323 23:25:41.026573 401618 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0323 23:25:41.026619 401618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-574316
I0323 23:25:41.099088 401618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32989 SSHKeyPath:/home/jenkins/minikube-integration/16143-62012/.minikube/machines/pause-574316/id_rsa Username:docker}
I0323 23:25:41.186783 401618 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0323 23:25:41.191463 401618 fix.go:57] fixHost completed within 1.933951947s
I0323 23:25:41.191505 401618 start.go:83] releasing machines lock for "pause-574316", held for 1.934014729s
I0323 23:25:41.191587 401618 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-574316
I0323 23:25:41.262808 401618 ssh_runner.go:195] Run: cat /version.json
I0323 23:25:41.262882 401618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-574316
I0323 23:25:41.262888 401618 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0323 23:25:41.262963 401618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-574316
I0323 23:25:41.340837 401618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32989 SSHKeyPath:/home/jenkins/minikube-integration/16143-62012/.minikube/machines/pause-574316/id_rsa Username:docker}
I0323 23:25:41.348422 401618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32989 SSHKeyPath:/home/jenkins/minikube-integration/16143-62012/.minikube/machines/pause-574316/id_rsa Username:docker}
I0323 23:25:41.460233 401618 ssh_runner.go:195] Run: systemctl --version
I0323 23:25:41.464249 401618 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0323 23:25:41.468128 401618 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0323 23:25:41.484554 401618 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0323 23:25:41.484643 401618 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0323 23:25:41.492572 401618 cni.go:258] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I0323 23:25:41.492615 401618 start.go:481] detecting cgroup driver to use...
I0323 23:25:41.492654 401618 detect.go:196] detected "cgroupfs" cgroup driver on host os
I0323 23:25:41.492777 401618 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0323 23:25:41.507375 401618 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0323 23:25:41.516258 401618 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0323 23:25:41.524309 401618 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0323 23:25:41.524358 401618 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0323 23:25:41.532176 401618 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0323 23:25:41.540045 401618 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0323 23:25:41.548055 401618 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0323 23:25:41.556320 401618 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0323 23:25:41.563881 401618 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0323 23:25:41.573263 401618 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0323 23:25:41.581440 401618 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0323 23:25:41.590053 401618 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0323 23:25:41.736075 401618 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0323 23:25:47.710149 401618 ssh_runner.go:235] Completed: sudo systemctl restart containerd: (5.974028883s)
I0323 23:25:47.710184 401618 start.go:481] detecting cgroup driver to use...
I0323 23:25:47.710218 401618 detect.go:196] detected "cgroupfs" cgroup driver on host os
I0323 23:25:47.710267 401618 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0323 23:25:47.752166 401618 cruntime.go:276] skipping containerd shutdown because we are bound to it
I0323 23:25:47.752234 401618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0323 23:25:47.763692 401618 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0323 23:25:47.781831 401618 ssh_runner.go:195] Run: which cri-dockerd
I0323 23:25:47.785130 401618 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0323 23:25:47.793988 401618 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0323 23:25:47.859058 401618 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0323 23:25:48.083211 401618 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0323 23:25:48.318629 401618 docker.go:538] configuring docker to use "cgroupfs" as cgroup driver...
I0323 23:25:48.318674 401618 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
I0323 23:25:48.362810 401618 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0323 23:25:48.491183 401618 ssh_runner.go:195] Run: sudo systemctl restart docker
I0323 23:25:49.242171 401618 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0323 23:25:49.338611 401618 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0323 23:25:49.431988 401618 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0323 23:25:49.518478 401618 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0323 23:25:49.606812 401618 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0323 23:25:49.622196 401618 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0323 23:25:49.768476 401618 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
I0323 23:25:49.876462 401618 start.go:528] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0323 23:25:49.876547 401618 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0323 23:25:49.881148 401618 start.go:549] Will wait 60s for crictl version
I0323 23:25:49.881199 401618 ssh_runner.go:195] Run: which crictl
I0323 23:25:49.884031 401618 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0323 23:25:49.920446 401618 start.go:565] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 23.0.1
RuntimeApiVersion: v1alpha2
I0323 23:25:49.920502 401618 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0323 23:25:49.950337 401618 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0323 23:25:49.978007 401618 out.go:204] * Preparing Kubernetes v1.26.3 on Docker 23.0.1 ...
I0323 23:25:49.978099 401618 cli_runner.go:164] Run: docker network inspect pause-574316 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0323 23:25:50.055471 401618 ssh_runner.go:195] Run: grep 192.168.67.1 host.minikube.internal$ /etc/hosts
I0323 23:25:50.059400 401618 preload.go:132] Checking if preload exists for k8s version v1.26.3 and runtime docker
I0323 23:25:50.059459 401618 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0323 23:25:50.081828 401618 docker.go:639] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.26.3
registry.k8s.io/kube-scheduler:v1.26.3
registry.k8s.io/kube-controller-manager:v1.26.3
registry.k8s.io/kube-proxy:v1.26.3
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/pause:3.9
registry.k8s.io/coredns/coredns:v1.9.3
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0323 23:25:50.081859 401618 docker.go:569] Images already preloaded, skipping extraction
I0323 23:25:50.081951 401618 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0323 23:25:50.105875 401618 docker.go:639] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.26.3
registry.k8s.io/kube-controller-manager:v1.26.3
registry.k8s.io/kube-scheduler:v1.26.3
registry.k8s.io/kube-proxy:v1.26.3
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/pause:3.9
registry.k8s.io/coredns/coredns:v1.9.3
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0323 23:25:50.105904 401618 cache_images.go:84] Images are preloaded, skipping loading
I0323 23:25:50.105963 401618 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0323 23:25:50.137759 401618 cni.go:84] Creating CNI manager for ""
I0323 23:25:50.137785 401618 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0323 23:25:50.137803 401618 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0323 23:25:50.137818 401618 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.26.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-574316 NodeName:pause-574316 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
I0323 23:25:50.137971 401618 kubeadm.go:177] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.67.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/cri-dockerd.sock
name: "pause-574316"
kubeletExtraArgs:
node-ip: 192.168.67.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.26.3
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0323 23:25:50.138035 401618 kubeadm.go:968] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.26.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=pause-574316 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
[Install]
config:
{KubernetesVersion:v1.26.3 ClusterName:pause-574316 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0323 23:25:50.138081 401618 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.3
I0323 23:25:50.146095 401618 binaries.go:44] Found k8s binaries, skipping transfer
I0323 23:25:50.146155 401618 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0323 23:25:50.153058 401618 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
I0323 23:25:50.166207 401618 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0323 23:25:50.179847 401618 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2088 bytes)
I0323 23:25:50.195400 401618 ssh_runner.go:195] Run: grep 192.168.67.2 control-plane.minikube.internal$ /etc/hosts
I0323 23:25:50.199342 401618 certs.go:56] Setting up /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/pause-574316 for IP: 192.168.67.2
I0323 23:25:50.199375 401618 certs.go:186] acquiring lock for shared ca certs: {Name:mkbfcc9ac63a4724ffa0206ecd1910ff6424bfdd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0323 23:25:50.199577 401618 certs.go:195] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16143-62012/.minikube/ca.key
I0323 23:25:50.199630 401618 certs.go:195] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16143-62012/.minikube/proxy-client-ca.key
I0323 23:25:50.199720 401618 certs.go:311] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/pause-574316/client.key
I0323 23:25:50.199802 401618 certs.go:311] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/pause-574316/apiserver.key.c7fa3a9e
I0323 23:25:50.199862 401618 certs.go:311] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/pause-574316/proxy-client.key
I0323 23:25:50.200017 401618 certs.go:401] found cert: /home/jenkins/minikube-integration/16143-62012/.minikube/certs/home/jenkins/minikube-integration/16143-62012/.minikube/certs/68702.pem (1338 bytes)
W0323 23:25:50.200062 401618 certs.go:397] ignoring /home/jenkins/minikube-integration/16143-62012/.minikube/certs/home/jenkins/minikube-integration/16143-62012/.minikube/certs/68702_empty.pem, impossibly tiny 0 bytes
I0323 23:25:50.200076 401618 certs.go:401] found cert: /home/jenkins/minikube-integration/16143-62012/.minikube/certs/home/jenkins/minikube-integration/16143-62012/.minikube/certs/ca-key.pem (1679 bytes)
I0323 23:25:50.200113 401618 certs.go:401] found cert: /home/jenkins/minikube-integration/16143-62012/.minikube/certs/home/jenkins/minikube-integration/16143-62012/.minikube/certs/ca.pem (1078 bytes)
I0323 23:25:50.200149 401618 certs.go:401] found cert: /home/jenkins/minikube-integration/16143-62012/.minikube/certs/home/jenkins/minikube-integration/16143-62012/.minikube/certs/cert.pem (1123 bytes)
I0323 23:25:50.200179 401618 certs.go:401] found cert: /home/jenkins/minikube-integration/16143-62012/.minikube/certs/home/jenkins/minikube-integration/16143-62012/.minikube/certs/key.pem (1675 bytes)
I0323 23:25:50.200238 401618 certs.go:401] found cert: /home/jenkins/minikube-integration/16143-62012/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16143-62012/.minikube/files/etc/ssl/certs/687022.pem (1708 bytes)
I0323 23:25:50.201014 401618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/pause-574316/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0323 23:25:50.221140 401618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/pause-574316/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0323 23:25:50.240905 401618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/pause-574316/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0323 23:25:50.260358 401618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/pause-574316/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0323 23:25:50.279345 401618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-62012/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0323 23:25:50.299109 401618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-62012/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0323 23:25:50.318589 401618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-62012/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0323 23:25:50.336426 401618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-62012/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0323 23:25:50.354322 401618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-62012/.minikube/certs/68702.pem --> /usr/share/ca-certificates/68702.pem (1338 bytes)
I0323 23:25:50.370942 401618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-62012/.minikube/files/etc/ssl/certs/687022.pem --> /usr/share/ca-certificates/687022.pem (1708 bytes)
I0323 23:25:50.389751 401618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-62012/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0323 23:25:50.409492 401618 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0323 23:25:50.422518 401618 ssh_runner.go:195] Run: openssl version
I0323 23:25:50.428379 401618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0323 23:25:50.436420 401618 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0323 23:25:50.439511 401618 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Mar 23 22:56 /usr/share/ca-certificates/minikubeCA.pem
I0323 23:25:50.439560 401618 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0323 23:25:50.444172 401618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0323 23:25:50.450869 401618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/68702.pem && ln -fs /usr/share/ca-certificates/68702.pem /etc/ssl/certs/68702.pem"
I0323 23:25:50.458693 401618 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/68702.pem
I0323 23:25:50.461629 401618 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Mar 23 22:59 /usr/share/ca-certificates/68702.pem
I0323 23:25:50.461676 401618 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/68702.pem
I0323 23:25:50.466163 401618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/68702.pem /etc/ssl/certs/51391683.0"
I0323 23:25:50.472629 401618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/687022.pem && ln -fs /usr/share/ca-certificates/687022.pem /etc/ssl/certs/687022.pem"
I0323 23:25:50.480396 401618 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/687022.pem
I0323 23:25:50.483525 401618 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Mar 23 22:59 /usr/share/ca-certificates/687022.pem
I0323 23:25:50.483566 401618 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/687022.pem
I0323 23:25:50.488443 401618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/687022.pem /etc/ssl/certs/3ec20f2e.0"
I0323 23:25:50.495824 401618 kubeadm.go:401] StartCluster: {Name:pause-574316 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.3 ClusterName:pause-574316 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0323 23:25:50.496003 401618 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0323 23:25:50.516228 401618 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0323 23:25:50.523528 401618 kubeadm.go:416] found existing configuration files, will attempt cluster restart
I0323 23:25:50.523546 401618 kubeadm.go:633] restartCluster start
I0323 23:25:50.523593 401618 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0323 23:25:50.530367 401618 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0323 23:25:50.531385 401618 kubeconfig.go:92] found "pause-574316" server: "https://192.168.67.2:8443"
I0323 23:25:50.533117 401618 kapi.go:59] client config for pause-574316: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16143-62012/.minikube/profiles/pause-574316/client.crt", KeyFile:"/home/jenkins/minikube-integration/16143-62012/.minikube/profiles/pause-574316/client.key", CAFile:"/home/jenkins/minikube-integration/16143-62012/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]stri
ng(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x192c3c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0323 23:25:50.534364 401618 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0323 23:25:50.541456 401618 api_server.go:165] Checking apiserver status ...
I0323 23:25:50.541494 401618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0323 23:25:50.549669 401618 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0323 23:25:51.050390 401618 api_server.go:165] Checking apiserver status ...
I0323 23:25:51.050468 401618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0323 23:25:51.064294 401618 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0323 23:25:51.550552 401618 api_server.go:165] Checking apiserver status ...
I0323 23:25:51.550627 401618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0323 23:25:51.561537 401618 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/6125/cgroup
I0323 23:25:51.570527 401618 api_server.go:181] apiserver freezer: "7:freezer:/docker/973cf0ca8459b8f5817b5ac522a54d72c66bd2d7c8e9e9db609121f92754b9fb/kubepods/burstable/pode3f7a1eab53ec8fb091240de98bc1524/6a198df97e4bd33611868552786c34b16a1896b4a18709ad6eaa65e7486b5d20"
I0323 23:25:51.570597 401618 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/973cf0ca8459b8f5817b5ac522a54d72c66bd2d7c8e9e9db609121f92754b9fb/kubepods/burstable/pode3f7a1eab53ec8fb091240de98bc1524/6a198df97e4bd33611868552786c34b16a1896b4a18709ad6eaa65e7486b5d20/freezer.state
I0323 23:25:51.578093 401618 api_server.go:203] freezer state: "THAWED"
I0323 23:25:51.578117 401618 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I0323 23:25:56.579297 401618 api_server.go:268] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0323 23:25:56.579379 401618 retry.go:31] will retry after 281.453148ms: state is "Stopped"
I0323 23:25:56.861828 401618 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I0323 23:26:01.862609 401618 api_server.go:268] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0323 23:26:01.862661 401618 retry.go:31] will retry after 338.872544ms: state is "Stopped"
I0323 23:26:02.202207 401618 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I0323 23:26:07.205679 401618 api_server.go:268] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0323 23:26:07.205736 401618 api_server.go:165] Checking apiserver status ...
I0323 23:26:07.205792 401618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0323 23:26:07.219960 401618 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/6125/cgroup
I0323 23:26:07.242746 401618 api_server.go:181] apiserver freezer: "7:freezer:/docker/973cf0ca8459b8f5817b5ac522a54d72c66bd2d7c8e9e9db609121f92754b9fb/kubepods/burstable/pode3f7a1eab53ec8fb091240de98bc1524/6a198df97e4bd33611868552786c34b16a1896b4a18709ad6eaa65e7486b5d20"
I0323 23:26:07.242832 401618 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/973cf0ca8459b8f5817b5ac522a54d72c66bd2d7c8e9e9db609121f92754b9fb/kubepods/burstable/pode3f7a1eab53ec8fb091240de98bc1524/6a198df97e4bd33611868552786c34b16a1896b4a18709ad6eaa65e7486b5d20/freezer.state
I0323 23:26:07.252982 401618 api_server.go:203] freezer state: "THAWED"
I0323 23:26:07.253019 401618 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I0323 23:26:11.781791 401618 api_server.go:268] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": read tcp 192.168.67.1:35480->192.168.67.2:8443: read: connection reset by peer
I0323 23:26:11.781859 401618 retry.go:31] will retry after 287.188822ms: state is "Stopped"
I0323 23:26:12.069213 401618 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I0323 23:26:12.069672 401618 api_server.go:268] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
I0323 23:26:12.069713 401618 retry.go:31] will retry after 310.499489ms: state is "Stopped"
I0323 23:26:12.381213 401618 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I0323 23:26:12.381698 401618 api_server.go:268] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
I0323 23:26:12.381745 401618 retry.go:31] will retry after 327.791373ms: state is "Stopped"
I0323 23:26:12.710265 401618 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I0323 23:26:12.710710 401618 api_server.go:268] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
I0323 23:26:12.710752 401618 retry.go:31] will retry after 495.316645ms: state is "Stopped"
I0323 23:26:13.206372 401618 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I0323 23:26:13.206805 401618 api_server.go:268] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
I0323 23:26:13.206850 401618 retry.go:31] will retry after 589.309728ms: state is "Stopped"
I0323 23:26:13.796739 401618 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I0323 23:26:13.797264 401618 api_server.go:268] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
I0323 23:26:13.797314 401618 retry.go:31] will retry after 895.454418ms: state is "Stopped"
I0323 23:26:14.692919 401618 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I0323 23:26:14.693369 401618 api_server.go:268] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
I0323 23:26:14.693431 401618 retry.go:31] will retry after 1.067586945s: state is "Stopped"
I0323 23:26:15.761447 401618 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I0323 23:26:15.761789 401618 api_server.go:268] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
I0323 23:26:15.761829 401618 retry.go:31] will retry after 1.243332361s: state is "Stopped"
I0323 23:26:17.005481 401618 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I0323 23:26:17.005938 401618 api_server.go:268] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
I0323 23:26:17.005984 401618 retry.go:31] will retry after 1.422748895s: state is "Stopped"
I0323 23:26:18.429483 401618 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I0323 23:26:18.429933 401618 api_server.go:268] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
I0323 23:26:18.429980 401618 retry.go:31] will retry after 1.810935197s: state is "Stopped"
I0323 23:26:20.241489 401618 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I0323 23:26:20.241958 401618 api_server.go:268] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
I0323 23:26:20.242012 401618 kubeadm.go:608] needs reconfigure: apiserver error: timed out waiting for the condition
I0323 23:26:20.242022 401618 kubeadm.go:1120] stopping kube-system containers ...
I0323 23:26:20.242169 401618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0323 23:26:20.273426 401618 docker.go:465] Stopping containers: [656b70fafbc2 2b7bc2ac835b 7ff3dcd747a3 d517e8e4d5d2 45416a5cd36b a9b1dc3910d9 6a198df97e4b 840b0c35d444 60c1dee0f178 80c388522552 f70a37494730 4b1c73f39f8c 7c4a71f1f0cd f2351c0cf203 7fed7e2ba6fe 9f27801249b0 b79bc8efd18f 52b133216226 933006561bf4 24f0fb4ace30 c71a79a234db c4b287ab62a2 f61f5c7340ec 03d421288ded 6da34435e995 f14a1f114c0b 8dd03effe021 37b991db5f35 b23873b32bda c5c0072529d3]
I0323 23:26:20.273517 401618 ssh_runner.go:195] Run: docker stop 656b70fafbc2 2b7bc2ac835b 7ff3dcd747a3 d517e8e4d5d2 45416a5cd36b a9b1dc3910d9 6a198df97e4b 840b0c35d444 60c1dee0f178 80c388522552 f70a37494730 4b1c73f39f8c 7c4a71f1f0cd f2351c0cf203 7fed7e2ba6fe 9f27801249b0 b79bc8efd18f 52b133216226 933006561bf4 24f0fb4ace30 c71a79a234db c4b287ab62a2 f61f5c7340ec 03d421288ded 6da34435e995 f14a1f114c0b 8dd03effe021 37b991db5f35 b23873b32bda c5c0072529d3
I0323 23:26:25.377843 401618 ssh_runner.go:235] Completed: docker stop 656b70fafbc2 2b7bc2ac835b 7ff3dcd747a3 d517e8e4d5d2 45416a5cd36b a9b1dc3910d9 6a198df97e4b 840b0c35d444 60c1dee0f178 80c388522552 f70a37494730 4b1c73f39f8c 7c4a71f1f0cd f2351c0cf203 7fed7e2ba6fe 9f27801249b0 b79bc8efd18f 52b133216226 933006561bf4 24f0fb4ace30 c71a79a234db c4b287ab62a2 f61f5c7340ec 03d421288ded 6da34435e995 f14a1f114c0b 8dd03effe021 37b991db5f35 b23873b32bda c5c0072529d3: (5.104279359s)
I0323 23:26:25.377931 401618 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I0323 23:26:25.436706 401618 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0323 23:26:25.444321 401618 kubeadm.go:155] found existing configuration files:
-rw------- 1 root root 5643 Mar 23 23:25 /etc/kubernetes/admin.conf
-rw------- 1 root root 5656 Mar 23 23:25 /etc/kubernetes/controller-manager.conf
-rw------- 1 root root 1987 Mar 23 23:25 /etc/kubernetes/kubelet.conf
-rw------- 1 root root 5600 Mar 23 23:25 /etc/kubernetes/scheduler.conf
I0323 23:26:25.444385 401618 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0323 23:26:25.453011 401618 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0323 23:26:25.465602 401618 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0323 23:26:25.474584 401618 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
stdout:
stderr:
I0323 23:26:25.474642 401618 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0323 23:26:25.488216 401618 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0323 23:26:25.500429 401618 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
stdout:
stderr:
I0323 23:26:25.500488 401618 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0323 23:26:25.509169 401618 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0323 23:26:25.518649 401618 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I0323 23:26:25.518678 401618 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0323 23:26:25.622028 401618 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0323 23:26:26.712524 401618 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.090464066s)
I0323 23:26:26.712561 401618 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0323 23:26:26.926415 401618 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0323 23:26:27.015749 401618 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I0323 23:26:27.108937 401618 api_server.go:51] waiting for apiserver process to appear ...
I0323 23:26:27.109010 401618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0323 23:26:27.138818 401618 api_server.go:71] duration metric: took 29.877914ms to wait for apiserver process to appear ...
I0323 23:26:27.138852 401618 api_server.go:87] waiting for apiserver healthz status ...
I0323 23:26:27.138865 401618 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I0323 23:26:30.702650 401618 api_server.go:278] https://192.168.67.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[-]poststarthook/start-apiextensions-controllers failed: reason withheld
[-]poststarthook/crd-informer-synced failed: reason withheld
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[-]poststarthook/apiservice-registration-controller failed: reason withheld
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0323 23:26:30.702688 401618 api_server.go:102] status: https://192.168.67.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[-]poststarthook/start-apiextensions-controllers failed: reason withheld
[-]poststarthook/crd-informer-synced failed: reason withheld
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[-]poststarthook/apiservice-registration-controller failed: reason withheld
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0323 23:26:31.203341 401618 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I0323 23:26:31.209521 401618 api_server.go:278] https://192.168.67.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0323 23:26:31.209555 401618 api_server.go:102] status: https://192.168.67.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0323 23:26:31.703040 401618 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I0323 23:26:31.711962 401618 api_server.go:278] https://192.168.67.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0323 23:26:31.711995 401618 api_server.go:102] status: https://192.168.67.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0323 23:26:32.203410 401618 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I0323 23:26:32.209766 401618 api_server.go:278] https://192.168.67.2:8443/healthz returned 200:
ok
I0323 23:26:32.218798 401618 api_server.go:140] control plane version: v1.26.3
I0323 23:26:32.218829 401618 api_server.go:130] duration metric: took 5.079969007s to wait for apiserver health ...
I0323 23:26:32.218847 401618 cni.go:84] Creating CNI manager for ""
I0323 23:26:32.218863 401618 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0323 23:26:32.221258 401618 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0323 23:26:32.223621 401618 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0323 23:26:32.233200 401618 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (565 bytes)
I0323 23:26:32.247230 401618 system_pods.go:43] waiting for kube-system pods to appear ...
I0323 23:26:32.258573 401618 system_pods.go:59] 7 kube-system pods found
I0323 23:26:32.258599 401618 system_pods.go:61] "coredns-787d4945fb-2sw8v" [05fc3b9f-534f-4c25-ab9a-0f1ea4cb9014] Running
I0323 23:26:32.258608 401618 system_pods.go:61] "coredns-787d4945fb-lljqk" [ce593e1c-39de-4a21-994e-157f74ab568e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0323 23:26:32.258615 401618 system_pods.go:61] "etcd-pause-574316" [7169e3e4-7786-4f24-a2dd-72dd5a23fc94] Running
I0323 23:26:32.258619 401618 system_pods.go:61] "kube-apiserver-pause-574316" [b9638a18-2208-4f86-9f5f-164a6129c16d] Running
I0323 23:26:32.258624 401618 system_pods.go:61] "kube-controller-manager-pause-574316" [8b9f404c-2710-4ae3-a29f-739d89bb6b42] Running
I0323 23:26:32.258629 401618 system_pods.go:61] "kube-proxy-lnk2t" [aeba9090-2690-42e1-8439-a0cd55ada6d0] Running
I0323 23:26:32.258633 401618 system_pods.go:61] "kube-scheduler-pause-574316" [f5014d38-c4ac-4952-bf48-afd90549b256] Running
I0323 23:26:32.258638 401618 system_pods.go:74] duration metric: took 11.390377ms to wait for pod list to return data ...
I0323 23:26:32.258647 401618 node_conditions.go:102] verifying NodePressure condition ...
I0323 23:26:32.262117 401618 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
I0323 23:26:32.262137 401618 node_conditions.go:123] node cpu capacity is 8
I0323 23:26:32.262149 401618 node_conditions.go:105] duration metric: took 3.492134ms to run NodePressure ...
I0323 23:26:32.262169 401618 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0323 23:26:32.577460 401618 kubeadm.go:769] waiting for restarted kubelet to initialise ...
I0323 23:26:32.582902 401618 retry.go:31] will retry after 323.760518ms: kubelet not initialised
I0323 23:26:32.911740 401618 kubeadm.go:784] kubelet initialised
I0323 23:26:32.911768 401618 kubeadm.go:785] duration metric: took 334.279613ms waiting for restarted kubelet to initialise ...
I0323 23:26:32.911781 401618 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0323 23:26:32.917306 401618 pod_ready.go:78] waiting up to 4m0s for pod "coredns-787d4945fb-2sw8v" in "kube-system" namespace to be "Ready" ...
I0323 23:26:32.923785 401618 pod_ready.go:92] pod "coredns-787d4945fb-2sw8v" in "kube-system" namespace has status "Ready":"True"
I0323 23:26:32.923807 401618 pod_ready.go:81] duration metric: took 6.468377ms waiting for pod "coredns-787d4945fb-2sw8v" in "kube-system" namespace to be "Ready" ...
I0323 23:26:32.923819 401618 pod_ready.go:78] waiting up to 4m0s for pod "coredns-787d4945fb-lljqk" in "kube-system" namespace to be "Ready" ...
I0323 23:26:34.935968 401618 pod_ready.go:102] pod "coredns-787d4945fb-lljqk" in "kube-system" namespace has status "Ready":"False"
I0323 23:26:37.435598 401618 pod_ready.go:92] pod "coredns-787d4945fb-lljqk" in "kube-system" namespace has status "Ready":"True"
I0323 23:26:37.435626 401618 pod_ready.go:81] duration metric: took 4.511800496s waiting for pod "coredns-787d4945fb-lljqk" in "kube-system" namespace to be "Ready" ...
I0323 23:26:37.435639 401618 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-574316" in "kube-system" namespace to be "Ready" ...
I0323 23:26:39.446424 401618 pod_ready.go:102] pod "etcd-pause-574316" in "kube-system" namespace has status "Ready":"False"
I0323 23:26:41.447016 401618 pod_ready.go:102] pod "etcd-pause-574316" in "kube-system" namespace has status "Ready":"False"
I0323 23:26:43.946954 401618 pod_ready.go:102] pod "etcd-pause-574316" in "kube-system" namespace has status "Ready":"False"
I0323 23:26:44.447057 401618 pod_ready.go:92] pod "etcd-pause-574316" in "kube-system" namespace has status "Ready":"True"
I0323 23:26:44.447087 401618 pod_ready.go:81] duration metric: took 7.011439342s waiting for pod "etcd-pause-574316" in "kube-system" namespace to be "Ready" ...
I0323 23:26:44.447102 401618 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-574316" in "kube-system" namespace to be "Ready" ...
I0323 23:26:44.452104 401618 pod_ready.go:92] pod "kube-apiserver-pause-574316" in "kube-system" namespace has status "Ready":"True"
I0323 23:26:44.452122 401618 pod_ready.go:81] duration metric: took 5.012337ms waiting for pod "kube-apiserver-pause-574316" in "kube-system" namespace to be "Ready" ...
I0323 23:26:44.452131 401618 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-574316" in "kube-system" namespace to be "Ready" ...
I0323 23:26:45.154244 401618 pod_ready.go:92] pod "kube-controller-manager-pause-574316" in "kube-system" namespace has status "Ready":"True"
I0323 23:26:45.154286 401618 pod_ready.go:81] duration metric: took 702.146362ms waiting for pod "kube-controller-manager-pause-574316" in "kube-system" namespace to be "Ready" ...
I0323 23:26:45.154300 401618 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-lnk2t" in "kube-system" namespace to be "Ready" ...
I0323 23:26:45.161861 401618 pod_ready.go:92] pod "kube-proxy-lnk2t" in "kube-system" namespace has status "Ready":"True"
I0323 23:26:45.161889 401618 pod_ready.go:81] duration metric: took 7.580234ms waiting for pod "kube-proxy-lnk2t" in "kube-system" namespace to be "Ready" ...
I0323 23:26:45.161903 401618 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-574316" in "kube-system" namespace to be "Ready" ...
I0323 23:26:45.166566 401618 pod_ready.go:92] pod "kube-scheduler-pause-574316" in "kube-system" namespace has status "Ready":"True"
I0323 23:26:45.166596 401618 pod_ready.go:81] duration metric: took 4.684396ms waiting for pod "kube-scheduler-pause-574316" in "kube-system" namespace to be "Ready" ...
I0323 23:26:45.166605 401618 pod_ready.go:38] duration metric: took 12.254811598s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0323 23:26:45.166630 401618 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0323 23:26:45.174654 401618 ops.go:34] apiserver oom_adj: -16
I0323 23:26:45.174677 401618 kubeadm.go:637] restartCluster took 54.651125652s
I0323 23:26:45.174685 401618 kubeadm.go:403] StartCluster complete in 54.678873105s
I0323 23:26:45.174705 401618 settings.go:142] acquiring lock: {Name:mk2143e7b36672d551bcc6ff6483f31f704df2e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0323 23:26:45.174775 401618 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/16143-62012/kubeconfig
I0323 23:26:45.175905 401618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-62012/kubeconfig: {Name:mkedf19780b2d3cba14a58c9ca6a4f1d32104ee0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0323 23:26:45.213579 401618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0323 23:26:45.213933 401618 config.go:182] Loaded profile config "pause-574316": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.3
I0323 23:26:45.213472 401618 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
I0323 23:26:45.214148 401618 kapi.go:59] client config for pause-574316: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16143-62012/.minikube/profiles/pause-574316/client.crt", KeyFile:"/home/jenkins/minikube-integration/16143-62012/.minikube/profiles/pause-574316/client.key", CAFile:"/home/jenkins/minikube-integration/16143-62012/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]stri
ng(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x192c3c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0323 23:26:45.414715 401618 out.go:177] * Enabled addons:
I0323 23:26:45.217242 401618 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-574316" context rescaled to 1 replicas
I0323 23:26:45.430053 401618 addons.go:499] enable addons completed in 216.595091ms: enabled=[]
I0323 23:26:45.430069 401618 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true}
I0323 23:26:45.436198 401618 out.go:177] * Verifying Kubernetes components...
I0323 23:26:45.436358 401618 start.go:890] CoreDNS already contains "host.minikube.internal" host record, skipping...
I0323 23:26:45.446881 401618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0323 23:26:45.460908 401618 node_ready.go:35] waiting up to 6m0s for node "pause-574316" to be "Ready" ...
I0323 23:26:45.463792 401618 node_ready.go:49] node "pause-574316" has status "Ready":"True"
I0323 23:26:45.463814 401618 node_ready.go:38] duration metric: took 2.869699ms waiting for node "pause-574316" to be "Ready" ...
I0323 23:26:45.463823 401618 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0323 23:26:45.468648 401618 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-lljqk" in "kube-system" namespace to be "Ready" ...
I0323 23:26:45.645139 401618 pod_ready.go:92] pod "coredns-787d4945fb-lljqk" in "kube-system" namespace has status "Ready":"True"
I0323 23:26:45.645160 401618 pod_ready.go:81] duration metric: took 176.488938ms waiting for pod "coredns-787d4945fb-lljqk" in "kube-system" namespace to be "Ready" ...
I0323 23:26:45.645170 401618 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-574316" in "kube-system" namespace to be "Ready" ...
I0323 23:26:46.045231 401618 pod_ready.go:92] pod "etcd-pause-574316" in "kube-system" namespace has status "Ready":"True"
I0323 23:26:46.045260 401618 pod_ready.go:81] duration metric: took 400.083583ms waiting for pod "etcd-pause-574316" in "kube-system" namespace to be "Ready" ...
I0323 23:26:46.045274 401618 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-574316" in "kube-system" namespace to be "Ready" ...
I0323 23:26:46.444173 401618 pod_ready.go:92] pod "kube-apiserver-pause-574316" in "kube-system" namespace has status "Ready":"True"
I0323 23:26:46.444194 401618 pod_ready.go:81] duration metric: took 398.912915ms waiting for pod "kube-apiserver-pause-574316" in "kube-system" namespace to be "Ready" ...
I0323 23:26:46.444204 401618 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-574316" in "kube-system" namespace to be "Ready" ...
I0323 23:26:46.844571 401618 pod_ready.go:92] pod "kube-controller-manager-pause-574316" in "kube-system" namespace has status "Ready":"True"
I0323 23:26:46.844592 401618 pod_ready.go:81] duration metric: took 400.382744ms waiting for pod "kube-controller-manager-pause-574316" in "kube-system" namespace to be "Ready" ...
I0323 23:26:46.844602 401618 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lnk2t" in "kube-system" namespace to be "Ready" ...
I0323 23:26:47.244514 401618 pod_ready.go:92] pod "kube-proxy-lnk2t" in "kube-system" namespace has status "Ready":"True"
I0323 23:26:47.244538 401618 pod_ready.go:81] duration metric: took 399.927693ms waiting for pod "kube-proxy-lnk2t" in "kube-system" namespace to be "Ready" ...
I0323 23:26:47.244548 401618 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-574316" in "kube-system" namespace to be "Ready" ...
I0323 23:26:47.644184 401618 pod_ready.go:92] pod "kube-scheduler-pause-574316" in "kube-system" namespace has status "Ready":"True"
I0323 23:26:47.644203 401618 pod_ready.go:81] duration metric: took 399.648889ms waiting for pod "kube-scheduler-pause-574316" in "kube-system" namespace to be "Ready" ...
I0323 23:26:47.644210 401618 pod_ready.go:38] duration metric: took 2.180378997s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0323 23:26:47.644231 401618 api_server.go:51] waiting for apiserver process to appear ...
I0323 23:26:47.644265 401618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0323 23:26:47.660462 401618 api_server.go:71] duration metric: took 2.230343116s to wait for apiserver process to appear ...
I0323 23:26:47.660489 401618 api_server.go:87] waiting for apiserver healthz status ...
I0323 23:26:47.660508 401618 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I0323 23:26:47.667464 401618 api_server.go:278] https://192.168.67.2:8443/healthz returned 200:
ok
I0323 23:26:47.668285 401618 api_server.go:140] control plane version: v1.26.3
I0323 23:26:47.668303 401618 api_server.go:130] duration metric: took 7.807644ms to wait for apiserver health ...
I0323 23:26:47.668310 401618 system_pods.go:43] waiting for kube-system pods to appear ...
I0323 23:26:47.847116 401618 system_pods.go:59] 6 kube-system pods found
I0323 23:26:47.847153 401618 system_pods.go:61] "coredns-787d4945fb-lljqk" [ce593e1c-39de-4a21-994e-157f74ab568e] Running
I0323 23:26:47.847161 401618 system_pods.go:61] "etcd-pause-574316" [7169e3e4-7786-4f24-a2dd-72dd5a23fc94] Running
I0323 23:26:47.847168 401618 system_pods.go:61] "kube-apiserver-pause-574316" [b9638a18-2208-4f86-9f5f-164a6129c16d] Running
I0323 23:26:47.847175 401618 system_pods.go:61] "kube-controller-manager-pause-574316" [8b9f404c-2710-4ae3-a29f-739d89bb6b42] Running
I0323 23:26:47.847181 401618 system_pods.go:61] "kube-proxy-lnk2t" [aeba9090-2690-42e1-8439-a0cd55ada6d0] Running
I0323 23:26:47.847187 401618 system_pods.go:61] "kube-scheduler-pause-574316" [f5014d38-c4ac-4952-bf48-afd90549b256] Running
I0323 23:26:47.847193 401618 system_pods.go:74] duration metric: took 178.878592ms to wait for pod list to return data ...
I0323 23:26:47.847201 401618 default_sa.go:34] waiting for default service account to be created ...
I0323 23:26:48.044586 401618 default_sa.go:45] found service account: "default"
I0323 23:26:48.044616 401618 default_sa.go:55] duration metric: took 197.409776ms for default service account to be created ...
I0323 23:26:48.044630 401618 system_pods.go:116] waiting for k8s-apps to be running ...
I0323 23:26:48.247931 401618 system_pods.go:86] 6 kube-system pods found
I0323 23:26:48.247963 401618 system_pods.go:89] "coredns-787d4945fb-lljqk" [ce593e1c-39de-4a21-994e-157f74ab568e] Running
I0323 23:26:48.247974 401618 system_pods.go:89] "etcd-pause-574316" [7169e3e4-7786-4f24-a2dd-72dd5a23fc94] Running
I0323 23:26:48.247980 401618 system_pods.go:89] "kube-apiserver-pause-574316" [b9638a18-2208-4f86-9f5f-164a6129c16d] Running
I0323 23:26:48.247986 401618 system_pods.go:89] "kube-controller-manager-pause-574316" [8b9f404c-2710-4ae3-a29f-739d89bb6b42] Running
I0323 23:26:48.247991 401618 system_pods.go:89] "kube-proxy-lnk2t" [aeba9090-2690-42e1-8439-a0cd55ada6d0] Running
I0323 23:26:48.247999 401618 system_pods.go:89] "kube-scheduler-pause-574316" [f5014d38-c4ac-4952-bf48-afd90549b256] Running
I0323 23:26:48.248007 401618 system_pods.go:126] duration metric: took 203.371205ms to wait for k8s-apps to be running ...
I0323 23:26:48.248015 401618 system_svc.go:44] waiting for kubelet service to be running ....
I0323 23:26:48.248065 401618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0323 23:26:48.258927 401618 system_svc.go:56] duration metric: took 10.902515ms WaitForService to wait for kubelet.
I0323 23:26:48.258954 401618 kubeadm.go:578] duration metric: took 2.828842444s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0323 23:26:48.258976 401618 node_conditions.go:102] verifying NodePressure condition ...
I0323 23:26:48.449583 401618 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
I0323 23:26:48.449608 401618 node_conditions.go:123] node cpu capacity is 8
I0323 23:26:48.449620 401618 node_conditions.go:105] duration metric: took 190.638556ms to run NodePressure ...
I0323 23:26:48.449633 401618 start.go:228] waiting for startup goroutines ...
I0323 23:26:48.449641 401618 start.go:233] waiting for cluster config update ...
I0323 23:26:48.449652 401618 start.go:242] writing updated cluster config ...
I0323 23:26:48.450019 401618 ssh_runner.go:195] Run: rm -f paused
I0323 23:26:48.534780 401618 start.go:554] kubectl: 1.26.3, cluster: 1.26.3 (minor skew: 0)
I0323 23:26:48.538018 401618 out.go:177] * Done! kubectl is now configured to use "pause-574316" cluster and "default" namespace by default
** /stderr **
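The stderr above shows why this second start reconfigured the cluster instead of taking the no-op path: kube-apiserver was running and its freezer cgroup was reported as "THAWED" (around 23:25:51 and 23:26:07), yet every probe of https://192.168.67.2:8443/healthz timed out or was refused, so minikube logged "needs reconfigure: apiserver error: timed out waiting for the condition", stopped the kube-system containers, and re-ran the kubeadm init phases. A rough shell sketch of the same probe sequence, for re-checking the apiserver by hand inside the node (e.g. via `minikube ssh -p pause-574316`); the profile IP, pgrep pattern, and cgroup layout are taken from this run, and unauthenticated curl access to /healthz is an assumption, not something the test does:

# Sketch only; mirrors the probe sequence in the log above (api_server.go:165-268).
pid=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*')                        # apiserver PID, same pattern as the log
cg=$(sudo grep -E '^[0-9]+:freezer:' "/proc/${pid}/cgroup" | cut -d: -f3)  # freezer cgroup path for that PID
sudo cat "/sys/fs/cgroup/freezer${cg}/freezer.state"                       # expect "THAWED" when the container is not paused
curl -k --max-time 5 "https://192.168.67.2:8443/healthz?verbose"           # assumption: anonymous /healthz access is allowed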
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestPause/serial/SecondStartNoReconfiguration]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect pause-574316
helpers_test.go:235: (dbg) docker inspect pause-574316:
-- stdout --
[
{
"Id": "973cf0ca8459b8f5817b5ac522a54d72c66bd2d7c8e9e9db609121f92754b9fb",
"Created": "2023-03-23T23:25:04.583396388Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 390898,
"ExitCode": 0,
"Error": "",
"StartedAt": "2023-03-23T23:25:05.007909282Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:9d2236b717ccec479afd77862e0eef2affb7c3e4fe7eecdc0546bff7b370db25",
"ResolvConfPath": "/var/lib/docker/containers/973cf0ca8459b8f5817b5ac522a54d72c66bd2d7c8e9e9db609121f92754b9fb/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/973cf0ca8459b8f5817b5ac522a54d72c66bd2d7c8e9e9db609121f92754b9fb/hostname",
"HostsPath": "/var/lib/docker/containers/973cf0ca8459b8f5817b5ac522a54d72c66bd2d7c8e9e9db609121f92754b9fb/hosts",
"LogPath": "/var/lib/docker/containers/973cf0ca8459b8f5817b5ac522a54d72c66bd2d7c8e9e9db609121f92754b9fb/973cf0ca8459b8f5817b5ac522a54d72c66bd2d7c8e9e9db609121f92754b9fb-json.log",
"Name": "/pause-574316",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"pause-574316:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {
"max-size": "100m"
}
},
"NetworkMode": "pause-574316",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 2147483648,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 4294967296,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/c54747c4abf8ec81cf7111f4ae0a9bdf3546a835b741fa9b4946c2cef7bb7c47-init/diff:/var/lib/docker/overlay2/d356d443959743e8c5ec1e688b0ccaccd2483fd24991ca327095d1ea51dadd79/diff:/var/lib/docker/overlay2/dd1855d68604dc5432757610d41f6488e2cf65b7ade63d0ac4dd50e3cb700545/diff:/var/lib/docker/overlay2/3ae5a9ac34ca4f4036f376d3f7ee1e6d806107b6ba140eee2af2df3088fe2af4/diff:/var/lib/docker/overlay2/a88a7a03b1dddb065d2da925165770d1982de0fb6388d7798dec4a6c996388ed/diff:/var/lib/docker/overlay2/11e0cdbbdfb5d84e0d99a3d4a7693f825097d37baa31784b182606407b254347/diff:/var/lib/docker/overlay2/f3679d076f087c60feb261250bae0ef050d7ed7a8876697b61f4e74260ac5c25/diff:/var/lib/docker/overlay2/3a9213ab7d98194272e65090b79370f92e0fed3b68466ca89c2fce6cc06bee37/diff:/var/lib/docker/overlay2/c7e7b51e4ed37e163c31a7a2769a396f00a3a46bbe043bb3d74144e3d7dbdf4b/diff:/var/lib/docker/overlay2/a5a37da3c24f5ba9b69245b491d59fa7f875d4bf22ab2d3b4fe2e0480245836e/diff:/var/lib/docker/overlay2/f36025
f30104b76500045a0755939ab273914eecce2e91f0541c32de5325546f/diff:/var/lib/docker/overlay2/ef9ccd83ee71ed9d46782a820551dbda8865609796f631a741766fab9be9c04b/diff:/var/lib/docker/overlay2/e105b68b5b16f55e25547056d8ce228bdac36d93107fd4a3a78c8b026fbe0140/diff:/var/lib/docker/overlay2/75ca52704ffd583bb6fbed231278a5c352311cb4dee88f8b731377a47cdf43cd/diff:/var/lib/docker/overlay2/70a153c20f330aaea42285756d01aeb9a3e45e8909ea0b266c7d189438588e4b/diff:/var/lib/docker/overlay2/e07683b025df1da95650fadc2612b6df0024b6d4ab531cf439bb426bb94dd7c6/diff:/var/lib/docker/overlay2/a9c09db98b0de89a8bd85bb42c47585ec8dd924dfea9913e0e1e581771cb76db/diff:/var/lib/docker/overlay2/467577b0b0b8cb64beff8ef36e7da084fb7cddcdea88ced35ada883720038870/diff:/var/lib/docker/overlay2/89ecada524594426b58db802e9a64eff841e5a0dda6609f65ba80c77dc71866e/diff:/var/lib/docker/overlay2/d2e226af46510168fcd51d532ca7a03e77c9d9eb5253b85afd78b26e7b839180/diff:/var/lib/docker/overlay2/e7c1552e27888c5d4d72be70f7b4614ac96872e390e99ad721f043fa28cdc212/diff:/var/lib/d
ocker/overlay2/3074211fc4276144c82302477aac25cc2363357462b8212747bf9a6abdb179b8/diff:/var/lib/docker/overlay2/2f0eed0a121e12185ea49a07f0a026b7cd3add1c64e943d8f00609db9cb06035/diff:/var/lib/docker/overlay2/efa9237fe1d3ed78c6d7939b6d7a46778b6c3851395039e00da7e7ba1c07743d/diff:/var/lib/docker/overlay2/0ca055233446f0ea58f8b702a09b991f77ae9c6f1a338762761848f3a4b12d4e/diff:/var/lib/docker/overlay2/aa7036e406ea8fcd3317c56097ff3b2227796276b2a8ab2f3f7103fed4dfa3b5/diff:/var/lib/docker/overlay2/2f3123bc47bc73bed1b1f7f75675e13e493ca4c8e4f5c4cb662aae58d9373cca/diff:/var/lib/docker/overlay2/1275037c371fbe052f7ca3e9c640764633c72ba9f3d6954b012d34cae8b5d69d/diff:/var/lib/docker/overlay2/7b9c1ddebbcba2b26d07bd7fba9c0fd87ce195be38c2a75f219ac7de57f85b3f/diff:/var/lib/docker/overlay2/2b39bb0f285174bfa621ed101af05ba3552825ab700a73135af1e8b8d7f0bb81/diff:/var/lib/docker/overlay2/643ab8ec872c6defa175401a06dd4a300105c4061619e41059a39a3ee35e3d40/diff:/var/lib/docker/overlay2/713ee57325a771a6a041c255726b832978f929eb1147c72212d96dd7dde
734b2/diff:/var/lib/docker/overlay2/19c1f1f71db682b75e904ad1c7d909f372d24486542012874e578917dc9a9bdf/diff:/var/lib/docker/overlay2/d26fed6403eddd78cf74be1d4a1f4012e1edccb465491f947e4746d92cebcd56/diff:/var/lib/docker/overlay2/0086cdc0bd9c0e4bd086d59a3944cac9d08674d00c80fa77d1f9faa935a5fb19/diff:/var/lib/docker/overlay2/9e14b9f084a1ea7826ee394f169e32a19b56fa135bde5da69486094355c778bb/diff:/var/lib/docker/overlay2/92af9bb2d1b59e9a45cd00af02a78ed7edab34388b268ad30cf749708e273ee8/diff:/var/lib/docker/overlay2/b13dcd677cb58d34d216059052299c900b1728fe3d46ae29cdf0f9a6991696ac/diff:/var/lib/docker/overlay2/30ba19dfbdf89b50aa26fe1695664407f059e1a354830d1d0363128794c81c8f/diff:/var/lib/docker/overlay2/0a91cb0450bc46b302d1b3518574e94a65ab366928b7b67d4dd446e682a14338/diff:/var/lib/docker/overlay2/0b3c4aae10bf80ea7c918fa052ad5ed468c2ebe01aa2f0658bc20304d1f6b07e/diff:/var/lib/docker/overlay2/9602ed727f176a29d28ed2d2045ad3c93f4ec63578399744c69db3d3057f1ed7/diff:/var/lib/docker/overlay2/33399f037b75aa41b061c2f9330cd6f041c290
9051f6ad5b09141a0346202db9/diff",
"MergedDir": "/var/lib/docker/overlay2/c54747c4abf8ec81cf7111f4ae0a9bdf3546a835b741fa9b4946c2cef7bb7c47/merged",
"UpperDir": "/var/lib/docker/overlay2/c54747c4abf8ec81cf7111f4ae0a9bdf3546a835b741fa9b4946c2cef7bb7c47/diff",
"WorkDir": "/var/lib/docker/overlay2/c54747c4abf8ec81cf7111f4ae0a9bdf3546a835b741fa9b4946c2cef7bb7c47/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "pause-574316",
"Source": "/var/lib/docker/volumes/pause-574316/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "pause-574316",
"Domainname": "",
"User": "root",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "pause-574316",
"name.minikube.sigs.k8s.io": "pause-574316",
"org.opencontainers.image.ref.name": "ubuntu",
"org.opencontainers.image.version": "20.04",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "83727ed535e639dbb7b60a28c289ec43475eb83a2bfc731da6a7d8b3710be5ba",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32989"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32988"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32985"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32987"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32986"
}
]
},
"SandboxKey": "/var/run/docker/netns/83727ed535e6",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"pause-574316": {
"IPAMConfig": {
"IPv4Address": "192.168.67.2"
},
"Links": null,
"Aliases": [
"973cf0ca8459",
"pause-574316"
],
"NetworkID": "2400bfbdd9cf00f3450521e73ae0be02c2bb9e5678c8bce35f9e0dc4ced8fa23",
"EndpointID": "1af4d5eb5080f4897840d3dd79c7fcfc8ac3d8dcb7665dd57389ff515a84a05e",
"Gateway": "192.168.67.1",
"IPAddress": "192.168.67.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:c0:a8:43:02",
"DriverOpts": null
}
}
}
}
]
-- /stdout --
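The inspect output above already carries the facts this post-mortem relies on: the node container is still running, its 8443/tcp API port is published on 127.0.0.1:32986, and it holds 192.168.67.2 on the pause-574316 network. Hypothetical one-liners (not part of the test harness) that pull the same values straight out of docker inspect for the profile name used in this run:

docker inspect -f '{{ .State.Status }}' pause-574316                                                  # running
docker inspect -f '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}' pause-574316   # 32986
docker inspect -f '{{ (index .NetworkSettings.Networks "pause-574316").IPAddress }}' pause-574316     # 192.168.67.2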
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p pause-574316 -n pause-574316
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p pause-574316 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-574316 logs -n 25: (1.222950807s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs:
-- stdout --
*
* ==> Audit <==
* |---------|------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
| ssh | -p cilium-452361 sudo | cilium-452361 | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC | |
| | systemctl cat kubelet | | | | | |
| | --no-pager | | | | | |
| ssh | -p cilium-452361 sudo | cilium-452361 | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC | |
| | journalctl -xeu kubelet --all | | | | | |
| | --full --no-pager | | | | | |
| ssh | -p cilium-452361 sudo cat | cilium-452361 | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC | |
| | /etc/kubernetes/kubelet.conf | | | | | |
| ssh | -p cilium-452361 sudo cat | cilium-452361 | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC | |
| | /var/lib/kubelet/config.yaml | | | | | |
| ssh | -p cilium-452361 sudo | cilium-452361 | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC | |
| | systemctl status docker --all | | | | | |
| | --full --no-pager | | | | | |
| ssh | -p cilium-452361 sudo | cilium-452361 | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC | |
| | systemctl cat docker | | | | | |
| | --no-pager | | | | | |
| ssh | -p cilium-452361 sudo cat | cilium-452361 | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC | |
| | /etc/docker/daemon.json | | | | | |
| ssh | -p cilium-452361 sudo docker | cilium-452361 | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC | |
| | system info | | | | | |
| start | -p force-systemd-env-286741 | force-systemd-env-286741 | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC | |
| | --memory=2048 | | | | | |
| | --alsologtostderr | | | | | |
| | -v=5 --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| ssh | -p cilium-452361 sudo | cilium-452361 | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC | |
| | systemctl status cri-docker | | | | | |
| | --all --full --no-pager | | | | | |
| ssh | -p cilium-452361 sudo | cilium-452361 | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC | |
| | systemctl cat cri-docker | | | | | |
| | --no-pager | | | | | |
| ssh | -p cilium-452361 sudo cat | cilium-452361 | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC | |
| | /etc/systemd/system/cri-docker.service.d/10-cni.conf | | | | | |
| ssh | -p cilium-452361 sudo cat | cilium-452361 | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC | |
| | /usr/lib/systemd/system/cri-docker.service | | | | | |
| ssh | -p cilium-452361 sudo | cilium-452361 | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC | |
| | cri-dockerd --version | | | | | |
| ssh | -p cilium-452361 sudo | cilium-452361 | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC | |
| | systemctl status containerd | | | | | |
| | --all --full --no-pager | | | | | |
| ssh | -p cilium-452361 sudo | cilium-452361 | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC | |
| | systemctl cat containerd | | | | | |
| | --no-pager | | | | | |
| ssh | -p cilium-452361 sudo cat | cilium-452361 | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC | |
| | /lib/systemd/system/containerd.service | | | | | |
| ssh | -p cilium-452361 sudo cat | cilium-452361 | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC | |
| | /etc/containerd/config.toml | | | | | |
| ssh | -p cilium-452361 sudo | cilium-452361 | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC | |
| | containerd config dump | | | | | |
| ssh | -p cilium-452361 sudo | cilium-452361 | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC | |
| | systemctl status crio --all | | | | | |
| | --full --no-pager | | | | | |
| ssh | -p cilium-452361 sudo | cilium-452361 | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC | |
| | systemctl cat crio --no-pager | | | | | |
| ssh | -p cilium-452361 sudo find | cilium-452361 | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC | |
| | /etc/crio -type f -exec sh -c | | | | | |
| | 'echo {}; cat {}' \; | | | | | |
| ssh | -p cilium-452361 sudo crio | cilium-452361 | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC | |
| | config | | | | | |
| delete | -p cilium-452361 | cilium-452361 | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC | 23 Mar 23 23:26 UTC |
| start | -p old-k8s-version-063647 | old-k8s-version-063647 | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --kvm-network=default | | | | | |
| | --kvm-qemu-uri=qemu:///system | | | | | |
| | --disable-driver-mounts | | | | | |
| | --keep-context=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| | --kubernetes-version=v1.16.0 | | | | | |
|---------|------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2023/03/23 23:26:40
Running on machine: ubuntu-20-agent-3
Binary: Built with gc go1.20.2 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
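Note: the header above documents the klog-style format used by every entry that follows ([IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg). A small illustrative Go parser for that format; the sample line is copied from the log and this is not minikube code:

    package main

    import (
        "fmt"
        "regexp"
    )

    // severity, mmdd date, time with microseconds, thread id, file:line, message
    var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+:\d+)\] (.*)$`)

    func main() {
        line := "I0323 23:26:40.042149 428061 out.go:296] Setting OutFile to fd 1 ..."
        m := klogLine.FindStringSubmatch(line)
        if m == nil {
            fmt.Println("no match")
            return
        }
        fmt.Printf("severity=%s date=%s time=%s threadid=%s source=%s msg=%q\n",
            m[1], m[2], m[3], m[4], m[5], m[6])
    }
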
I0323 23:26:40.042149 428061 out.go:296] Setting OutFile to fd 1 ...
I0323 23:26:40.042248 428061 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0323 23:26:40.042257 428061 out.go:309] Setting ErrFile to fd 2...
I0323 23:26:40.042261 428061 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0323 23:26:40.042366 428061 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16143-62012/.minikube/bin
I0323 23:26:40.042954 428061 out.go:303] Setting JSON to false
I0323 23:26:40.047193 428061 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":7746,"bootTime":1679606254,"procs":1211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1030-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0323 23:26:40.047254 428061 start.go:135] virtualization: kvm guest
I0323 23:26:40.049796 428061 out.go:177] * [old-k8s-version-063647] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
I0323 23:26:40.051284 428061 out.go:177] - MINIKUBE_LOCATION=16143
I0323 23:26:40.051309 428061 notify.go:220] Checking for updates...
I0323 23:26:40.052905 428061 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0323 23:26:40.054785 428061 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/16143-62012/kubeconfig
I0323 23:26:40.056430 428061 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/16143-62012/.minikube
I0323 23:26:40.058083 428061 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0323 23:26:40.059646 428061 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0323 23:26:40.061783 428061 config.go:182] Loaded profile config "force-systemd-env-286741": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.3
I0323 23:26:40.061882 428061 config.go:182] Loaded profile config "kubernetes-upgrade-120624": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.0-beta.0
I0323 23:26:40.062033 428061 config.go:182] Loaded profile config "pause-574316": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.3
I0323 23:26:40.062098 428061 driver.go:365] Setting default libvirt URI to qemu:///system
I0323 23:26:40.147368 428061 docker.go:121] docker version: linux-23.0.1:Docker Engine - Community
I0323 23:26:40.147472 428061 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0323 23:26:40.295961 428061 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:47 SystemTime:2023-03-23 23:26:40.275708441 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1030-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:23.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1e1ea6e986c6c86565bc33d52e34b81b3e2bc71f Expected:1e1ea6e986c6c86565bc33d52e34b81b3e2bc71f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.16.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0323 23:26:40.296057 428061 docker.go:294] overlay module found
I0323 23:26:40.298752 428061 out.go:177] * Using the docker driver based on user configuration
I0323 23:26:40.300448 428061 start.go:295] selected driver: docker
I0323 23:26:40.300468 428061 start.go:856] validating driver "docker" against <nil>
I0323 23:26:40.300482 428061 start.go:867] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0323 23:26:40.301339 428061 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0323 23:26:40.438182 428061 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:42 SystemTime:2023-03-23 23:26:40.428586758 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1030-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:23.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1e1ea6e986c6c86565bc33d52e34b81b3e2bc71f Expected:1e1ea6e986c6c86565bc33d52e34b81b3e2bc71f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.16.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0323 23:26:40.438301 428061 start_flags.go:305] no existing cluster config was found, will generate one from the flags
I0323 23:26:40.438509 428061 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0323 23:26:40.441248 428061 out.go:177] * Using Docker driver with root privileges
I0323 23:26:40.442932 428061 cni.go:84] Creating CNI manager for ""
I0323 23:26:40.442974 428061 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
I0323 23:26:40.442984 428061 start_flags.go:319] config:
{Name:old-k8s-version-063647 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-063647 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0323 23:26:40.444845 428061 out.go:177] * Starting control plane node old-k8s-version-063647 in cluster old-k8s-version-063647
I0323 23:26:40.446536 428061 cache.go:120] Beginning downloading kic base image for docker with docker
I0323 23:26:40.448053 428061 out.go:177] * Pulling base image ...
I0323 23:26:40.449652 428061 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
I0323 23:26:40.449683 428061 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 in local docker daemon
I0323 23:26:40.449703 428061 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16143-62012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
I0323 23:26:40.449720 428061 cache.go:57] Caching tarball of preloaded images
I0323 23:26:40.449803 428061 preload.go:174] Found /home/jenkins/minikube-integration/16143-62012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0323 23:26:40.449814 428061 cache.go:60] Finished verifying existence of preloaded tar for v1.16.0 on docker
I0323 23:26:40.449923 428061 profile.go:148] Saving config to /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/old-k8s-version-063647/config.json ...
I0323 23:26:40.449948 428061 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/old-k8s-version-063647/config.json: {Name:mkd269866aecb4e0ebd7c80fae44792dc2e78f68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0323 23:26:40.540045 428061 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 in local docker daemon, skipping pull
I0323 23:26:40.540081 428061 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 exists in daemon, skipping load
I0323 23:26:40.540105 428061 cache.go:193] Successfully downloaded all kic artifacts
I0323 23:26:40.540144 428061 start.go:364] acquiring machines lock for old-k8s-version-063647: {Name:mk836ec8f4a8439e66a7c2c2dcb6074efc06d654 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0323 23:26:40.540267 428061 start.go:368] acquired machines lock for "old-k8s-version-063647" in 98.708µs
I0323 23:26:40.540298 428061 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-063647 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-063647 Namespace:default APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimiza
tions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
I0323 23:26:40.540420 428061 start.go:125] createHost starting for "" (driver="docker")
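Note: the "acquiring machines lock" lines above show a lock taken with Delay:500ms and Timeout:10m0s and acquired in under 100µs because nothing else held it. A rough Go sketch of that retry-with-delay pattern using an exclusive lock file; this is only an illustration of the pattern, not minikube's actual lock implementation, and the lock path is made up:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // acquireLock retries creating an exclusive lock file every `delay`
    // until `timeout` expires, mirroring the Delay/Timeout fields logged above.
    func acquireLock(path string, delay, timeout time.Duration) (func(), error) {
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                f.Close()
                return func() { os.Remove(path) }, nil
            }
            if time.Now().After(deadline) {
                return nil, fmt.Errorf("timed out acquiring %s after %s", path, timeout)
            }
            time.Sleep(delay)
        }
    }

    func main() {
        start := time.Now()
        release, err := acquireLock("/tmp/machines-old-k8s-version-063647.lock", 500*time.Millisecond, 10*time.Minute)
        if err != nil {
            fmt.Println(err)
            return
        }
        defer release()
        fmt.Println("acquired machines lock in", time.Since(start))
    }
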
I0323 23:26:37.666420 360910 api_server.go:252] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
I0323 23:26:37.666756 360910 api_server.go:268] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
I0323 23:26:37.915164 360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0323 23:26:37.934415 360910 logs.go:277] 2 containers: [e04b42305ee7 0d8b85178a1f]
I0323 23:26:37.934495 360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0323 23:26:37.954816 360910 logs.go:277] 1 containers: [a90d829451b2]
I0323 23:26:37.954881 360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0323 23:26:37.973222 360910 logs.go:277] 0 containers: []
W0323 23:26:37.973245 360910 logs.go:279] No container was found matching "coredns"
I0323 23:26:37.973298 360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0323 23:26:37.992640 360910 logs.go:277] 2 containers: [c527be391322 4bb7f84567d3]
I0323 23:26:37.992731 360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0323 23:26:38.012097 360910 logs.go:277] 1 containers: [333ad261cea4]
I0323 23:26:38.012179 360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0323 23:26:38.030328 360910 logs.go:277] 2 containers: [9dd80939614e af93893100e7]
I0323 23:26:38.030409 360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I0323 23:26:38.048993 360910 logs.go:277] 0 containers: []
W0323 23:26:38.049024 360910 logs.go:279] No container was found matching "kindnet"
I0323 23:26:38.049080 360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0323 23:26:38.068667 360910 logs.go:277] 1 containers: [eac6b13c2df0]
I0323 23:26:38.068707 360910 logs.go:123] Gathering logs for describe nodes ...
I0323 23:26:38.068722 360910 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0323 23:26:38.127007 360910 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0323 23:26:38.127040 360910 logs.go:123] Gathering logs for kube-controller-manager [9dd80939614e] ...
I0323 23:26:38.127056 360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dd80939614e"
I0323 23:26:38.147666 360910 logs.go:123] Gathering logs for dmesg ...
I0323 23:26:38.147691 360910 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0323 23:26:38.168212 360910 logs.go:123] Gathering logs for kube-scheduler [4bb7f84567d3] ...
I0323 23:26:38.168249 360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bb7f84567d3"
I0323 23:26:38.197795 360910 logs.go:123] Gathering logs for kube-controller-manager [af93893100e7] ...
I0323 23:26:38.197836 360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af93893100e7"
I0323 23:26:38.243949 360910 logs.go:123] Gathering logs for storage-provisioner [eac6b13c2df0] ...
I0323 23:26:38.243989 360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eac6b13c2df0"
I0323 23:26:38.264103 360910 logs.go:123] Gathering logs for etcd [a90d829451b2] ...
I0323 23:26:38.264130 360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a90d829451b2"
I0323 23:26:38.288660 360910 logs.go:123] Gathering logs for kube-scheduler [c527be391322] ...
I0323 23:26:38.288696 360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c527be391322"
I0323 23:26:38.363370 360910 logs.go:123] Gathering logs for kube-proxy [333ad261cea4] ...
I0323 23:26:38.363403 360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 333ad261cea4"
I0323 23:26:38.386060 360910 logs.go:123] Gathering logs for container status ...
I0323 23:26:38.386089 360910 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0323 23:26:38.418791 360910 logs.go:123] Gathering logs for kubelet ...
I0323 23:26:38.418815 360910 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0323 23:26:38.548713 360910 logs.go:123] Gathering logs for kube-apiserver [e04b42305ee7] ...
I0323 23:26:38.548764 360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e04b42305ee7"
I0323 23:26:38.579492 360910 logs.go:123] Gathering logs for kube-apiserver [0d8b85178a1f] ...
I0323 23:26:38.579537 360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d8b85178a1f"
I0323 23:26:38.618692 360910 logs.go:123] Gathering logs for Docker ...
I0323 23:26:38.618721 360910 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
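Note: process 360910 above is stuck in a retry loop: every few seconds it probes https://192.168.76.2:8443/healthz, gets "connection refused", and falls back to collecting container logs before trying again. A hedged Go sketch of that kind of healthz polling; the endpoint and rough cadence are read off the log, and TLS verification is skipped here only to keep the example short (the real client trusts the cluster CA):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitForHealthz polls url until it answers 200 OK or the deadline expires.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   2 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(3 * time.Second) // roughly the retry cadence seen in the log
        }
        return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.76.2:8443/healthz", time.Minute); err != nil {
            fmt.Println(err)
        }
    }
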
I0323 23:26:41.155209 360910 api_server.go:252] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
I0323 23:26:41.155664 360910 api_server.go:268] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
I0323 23:26:41.415055 360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0323 23:26:41.434873 360910 logs.go:277] 2 containers: [e04b42305ee7 0d8b85178a1f]
I0323 23:26:41.434945 360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0323 23:26:41.455006 360910 logs.go:277] 1 containers: [a90d829451b2]
I0323 23:26:41.455077 360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0323 23:26:41.472882 360910 logs.go:277] 0 containers: []
W0323 23:26:41.472906 360910 logs.go:279] No container was found matching "coredns"
I0323 23:26:41.472950 360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0323 23:26:41.491292 360910 logs.go:277] 2 containers: [c527be391322 4bb7f84567d3]
I0323 23:26:41.491390 360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0323 23:26:39.446424 401618 pod_ready.go:102] pod "etcd-pause-574316" in "kube-system" namespace has status "Ready":"False"
I0323 23:26:41.447016 401618 pod_ready.go:102] pod "etcd-pause-574316" in "kube-system" namespace has status "Ready":"False"
I0323 23:26:39.280123 427158 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
I0323 23:26:39.280357 427158 start.go:159] libmachine.API.Create for "force-systemd-env-286741" (driver="docker")
I0323 23:26:39.280387 427158 client.go:168] LocalClient.Create starting
I0323 23:26:39.280458 427158 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16143-62012/.minikube/certs/ca.pem
I0323 23:26:39.280507 427158 main.go:141] libmachine: Decoding PEM data...
I0323 23:26:39.280530 427158 main.go:141] libmachine: Parsing certificate...
I0323 23:26:39.280594 427158 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16143-62012/.minikube/certs/cert.pem
I0323 23:26:39.280623 427158 main.go:141] libmachine: Decoding PEM data...
I0323 23:26:39.280640 427158 main.go:141] libmachine: Parsing certificate...
I0323 23:26:39.280974 427158 cli_runner.go:164] Run: docker network inspect force-systemd-env-286741 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0323 23:26:39.354615 427158 cli_runner.go:211] docker network inspect force-systemd-env-286741 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0323 23:26:39.354704 427158 network_create.go:281] running [docker network inspect force-systemd-env-286741] to gather additional debugging logs...
I0323 23:26:39.354728 427158 cli_runner.go:164] Run: docker network inspect force-systemd-env-286741
W0323 23:26:39.425557 427158 cli_runner.go:211] docker network inspect force-systemd-env-286741 returned with exit code 1
I0323 23:26:39.425596 427158 network_create.go:284] error running [docker network inspect force-systemd-env-286741]: docker network inspect force-systemd-env-286741: exit status 1
stdout:
[]
stderr:
Error response from daemon: network force-systemd-env-286741 not found
I0323 23:26:39.425628 427158 network_create.go:286] output of [docker network inspect force-systemd-env-286741]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network force-systemd-env-286741 not found
** /stderr **
I0323 23:26:39.425680 427158 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0323 23:26:39.503698 427158 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-5c8e73f5a026 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:0a:b3:fe:c5} reservation:<nil>}
I0323 23:26:39.504676 427158 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-76643bda3762 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:f7:a2:b3:ec} reservation:<nil>}
I0323 23:26:39.505710 427158 network.go:214] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2400bfbdd9cf IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:a7:a4:76:86} reservation:<nil>}
I0323 23:26:39.506685 427158 network.go:214] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-cd4e78a8bfb8 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:a6:13:91:cb} reservation:<nil>}
I0323 23:26:39.507885 427158 network.go:209] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00175a3d0}
I0323 23:26:39.507923 427158 network_create.go:123] attempt to create docker network force-systemd-env-286741 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
I0323 23:26:39.507984 427158 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-286741 force-systemd-env-286741
I0323 23:26:39.624494 427158 network_create.go:107] docker network force-systemd-env-286741 192.168.85.0/24 created
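Note: the network_create lines above walk candidate private /24 subnets in steps of 9 in the third octet, skip the ones already claimed by existing bridges (192.168.49.0, .58.0, .67.0, .76.0) and settle on 192.168.85.0/24. A simplified Go sketch of that selection; the step size and candidate list are inferred from the log, not taken from minikube's source:

    package main

    import "fmt"

    // freeSubnet returns the first candidate /24 not already in use.
    func freeSubnet(taken map[string]bool) string {
        for octet := 49; octet <= 247; octet += 9 {
            cidr := fmt.Sprintf("192.168.%d.0/24", octet)
            if !taken[cidr] {
                return cidr
            }
        }
        return ""
    }

    func main() {
        taken := map[string]bool{
            "192.168.49.0/24": true, // bridges reported by `docker network inspect` above
            "192.168.58.0/24": true,
            "192.168.67.0/24": true,
            "192.168.76.0/24": true,
        }
        fmt.Println("using free private subnet", freeSubnet(taken)) // 192.168.85.0/24
    }
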
I0323 23:26:39.624528 427158 kic.go:117] calculated static IP "192.168.85.2" for the "force-systemd-env-286741" container
I0323 23:26:39.624580 427158 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0323 23:26:39.699198 427158 cli_runner.go:164] Run: docker volume create force-systemd-env-286741 --label name.minikube.sigs.k8s.io=force-systemd-env-286741 --label created_by.minikube.sigs.k8s.io=true
I0323 23:26:39.772552 427158 oci.go:103] Successfully created a docker volume force-systemd-env-286741
I0323 23:26:39.772640 427158 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-286741-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-286741 --entrypoint /usr/bin/test -v force-systemd-env-286741:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 -d /var/lib
I0323 23:26:40.396101 427158 oci.go:107] Successfully prepared a docker volume force-systemd-env-286741
I0323 23:26:40.396169 427158 preload.go:132] Checking if preload exists for k8s version v1.26.3 and runtime docker
I0323 23:26:40.396201 427158 kic.go:190] Starting extracting preloaded images to volume ...
I0323 23:26:40.396283 427158 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16143-62012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-286741:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 -I lz4 -xf /preloaded.tar -C /extractDir
I0323 23:26:43.652059 427158 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16143-62012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-286741:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 -I lz4 -xf /preloaded.tar -C /extractDir: (3.255698579s)
I0323 23:26:43.652098 427158 kic.go:199] duration metric: took 3.255892 seconds to extract preloaded images to volume
W0323 23:26:43.652249 427158 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0323 23:26:43.652340 427158 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0323 23:26:43.788292 427158 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-286741 --name force-systemd-env-286741 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-286741 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-286741 --network force-systemd-env-286741 --ip 192.168.85.2 --volume force-systemd-env-286741:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978
I0323 23:26:40.542931 428061 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
I0323 23:26:40.543143 428061 start.go:159] libmachine.API.Create for "old-k8s-version-063647" (driver="docker")
I0323 23:26:40.543161 428061 client.go:168] LocalClient.Create starting
I0323 23:26:40.543233 428061 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16143-62012/.minikube/certs/ca.pem
I0323 23:26:40.543267 428061 main.go:141] libmachine: Decoding PEM data...
I0323 23:26:40.543291 428061 main.go:141] libmachine: Parsing certificate...
I0323 23:26:40.543363 428061 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16143-62012/.minikube/certs/cert.pem
I0323 23:26:40.543394 428061 main.go:141] libmachine: Decoding PEM data...
I0323 23:26:40.543409 428061 main.go:141] libmachine: Parsing certificate...
I0323 23:26:40.543830 428061 cli_runner.go:164] Run: docker network inspect old-k8s-version-063647 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0323 23:26:40.622688 428061 cli_runner.go:211] docker network inspect old-k8s-version-063647 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0323 23:26:40.622796 428061 network_create.go:281] running [docker network inspect old-k8s-version-063647] to gather additional debugging logs...
I0323 23:26:40.622825 428061 cli_runner.go:164] Run: docker network inspect old-k8s-version-063647
W0323 23:26:40.691850 428061 cli_runner.go:211] docker network inspect old-k8s-version-063647 returned with exit code 1
I0323 23:26:40.691881 428061 network_create.go:284] error running [docker network inspect old-k8s-version-063647]: docker network inspect old-k8s-version-063647: exit status 1
stdout:
[]
stderr:
Error response from daemon: network old-k8s-version-063647 not found
I0323 23:26:40.691895 428061 network_create.go:286] output of [docker network inspect old-k8s-version-063647]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network old-k8s-version-063647 not found
** /stderr **
I0323 23:26:40.691971 428061 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0323 23:26:40.769117 428061 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-5c8e73f5a026 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:0a:b3:fe:c5} reservation:<nil>}
I0323 23:26:40.769965 428061 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-76643bda3762 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:f7:a2:b3:ec} reservation:<nil>}
I0323 23:26:40.770928 428061 network.go:214] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2400bfbdd9cf IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:a7:a4:76:86} reservation:<nil>}
I0323 23:26:40.771945 428061 network.go:214] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-cd4e78a8bfb8 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:a6:13:91:cb} reservation:<nil>}
I0323 23:26:40.773155 428061 network.go:214] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-f79741dc633b IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:02:42:e0:82:cf:7a} reservation:<nil>}
I0323 23:26:40.774473 428061 network.go:209] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0014e36b0}
I0323 23:26:40.774511 428061 network_create.go:123] attempt to create docker network old-k8s-version-063647 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
I0323 23:26:40.774584 428061 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-063647 old-k8s-version-063647
I0323 23:26:40.898151 428061 network_create.go:107] docker network old-k8s-version-063647 192.168.94.0/24 created
I0323 23:26:40.898189 428061 kic.go:117] calculated static IP "192.168.94.2" for the "old-k8s-version-063647" container
I0323 23:26:40.898268 428061 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0323 23:26:40.974566 428061 cli_runner.go:164] Run: docker volume create old-k8s-version-063647 --label name.minikube.sigs.k8s.io=old-k8s-version-063647 --label created_by.minikube.sigs.k8s.io=true
I0323 23:26:41.045122 428061 oci.go:103] Successfully created a docker volume old-k8s-version-063647
I0323 23:26:41.045212 428061 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-063647-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-063647 --entrypoint /usr/bin/test -v old-k8s-version-063647:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 -d /var/lib
I0323 23:26:44.069733 428061 cli_runner.go:217] Completed: docker run --rm --name old-k8s-version-063647-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-063647 --entrypoint /usr/bin/test -v old-k8s-version-063647:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 -d /var/lib: (3.024480313s)
I0323 23:26:44.069768 428061 oci.go:107] Successfully prepared a docker volume old-k8s-version-063647
I0323 23:26:44.069781 428061 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
I0323 23:26:44.069803 428061 kic.go:190] Starting extracting preloaded images to volume ...
I0323 23:26:44.069874 428061 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16143-62012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-063647:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 -I lz4 -xf /preloaded.tar -C /extractDir
I0323 23:26:43.946954 401618 pod_ready.go:102] pod "etcd-pause-574316" in "kube-system" namespace has status "Ready":"False"
I0323 23:26:44.447057 401618 pod_ready.go:92] pod "etcd-pause-574316" in "kube-system" namespace has status "Ready":"True"
I0323 23:26:44.447087 401618 pod_ready.go:81] duration metric: took 7.011439342s waiting for pod "etcd-pause-574316" in "kube-system" namespace to be "Ready" ...
I0323 23:26:44.447102 401618 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-574316" in "kube-system" namespace to be "Ready" ...
I0323 23:26:44.452104 401618 pod_ready.go:92] pod "kube-apiserver-pause-574316" in "kube-system" namespace has status "Ready":"True"
I0323 23:26:44.452122 401618 pod_ready.go:81] duration metric: took 5.012337ms waiting for pod "kube-apiserver-pause-574316" in "kube-system" namespace to be "Ready" ...
I0323 23:26:44.452131 401618 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-574316" in "kube-system" namespace to be "Ready" ...
I0323 23:26:45.154244 401618 pod_ready.go:92] pod "kube-controller-manager-pause-574316" in "kube-system" namespace has status "Ready":"True"
I0323 23:26:45.154286 401618 pod_ready.go:81] duration metric: took 702.146362ms waiting for pod "kube-controller-manager-pause-574316" in "kube-system" namespace to be "Ready" ...
I0323 23:26:45.154300 401618 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-lnk2t" in "kube-system" namespace to be "Ready" ...
I0323 23:26:45.161861 401618 pod_ready.go:92] pod "kube-proxy-lnk2t" in "kube-system" namespace has status "Ready":"True"
I0323 23:26:45.161889 401618 pod_ready.go:81] duration metric: took 7.580234ms waiting for pod "kube-proxy-lnk2t" in "kube-system" namespace to be "Ready" ...
I0323 23:26:45.161903 401618 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-574316" in "kube-system" namespace to be "Ready" ...
I0323 23:26:45.166566 401618 pod_ready.go:92] pod "kube-scheduler-pause-574316" in "kube-system" namespace has status "Ready":"True"
I0323 23:26:45.166596 401618 pod_ready.go:81] duration metric: took 4.684396ms waiting for pod "kube-scheduler-pause-574316" in "kube-system" namespace to be "Ready" ...
I0323 23:26:45.166605 401618 pod_ready.go:38] duration metric: took 12.254811598s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
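Note: the pod_ready lines above wait, pod by pod, for each control-plane pod in kube-system to report the PodReady condition before the test moves on. A rough client-go equivalent of one such wait; the kubeconfig path is the KUBECONFIG shown earlier in the log, the pod name is taken from the log, and this is not the minikube helper itself:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's PodReady condition is True.
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/16143-62012/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        deadline := time.Now().Add(4 * time.Minute) // the log waits "up to 4m0s" per pod
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-pause-574316", metav1.GetOptions{})
            if err == nil && podReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for pod to be Ready")
    }
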
I0323 23:26:45.166630 401618 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0323 23:26:45.174654 401618 ops.go:34] apiserver oom_adj: -16
I0323 23:26:45.174677 401618 kubeadm.go:637] restartCluster took 54.651125652s
I0323 23:26:45.174685 401618 kubeadm.go:403] StartCluster complete in 54.678873105s
I0323 23:26:45.174705 401618 settings.go:142] acquiring lock: {Name:mk2143e7b36672d551bcc6ff6483f31f704df2e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0323 23:26:45.174775 401618 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/16143-62012/kubeconfig
I0323 23:26:45.175905 401618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-62012/kubeconfig: {Name:mkedf19780b2d3cba14a58c9ca6a4f1d32104ee0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0323 23:26:45.213579 401618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0323 23:26:45.213933 401618 config.go:182] Loaded profile config "pause-574316": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.3
I0323 23:26:45.213472 401618 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
I0323 23:26:45.214148 401618 kapi.go:59] client config for pause-574316: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16143-62012/.minikube/profiles/pause-574316/client.crt", KeyFile:"/home/jenkins/minikube-integration/16143-62012/.minikube/profiles/pause-574316/client.key", CAFile:"/home/jenkins/minikube-integration/16143-62012/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]stri
ng(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x192c3c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0323 23:26:45.414715 401618 out.go:177] * Enabled addons:
I0323 23:26:45.217242 401618 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-574316" context rescaled to 1 replicas
I0323 23:26:45.430053 401618 addons.go:499] enable addons completed in 216.595091ms: enabled=[]
I0323 23:26:45.430069 401618 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true}
I0323 23:26:45.436198 401618 out.go:177] * Verifying Kubernetes components...
I0323 23:26:41.512784 360910 logs.go:277] 1 containers: [333ad261cea4]
I0323 23:26:41.580770 360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0323 23:26:41.604486 360910 logs.go:277] 2 containers: [9dd80939614e af93893100e7]
I0323 23:26:41.604573 360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I0323 23:26:41.623789 360910 logs.go:277] 0 containers: []
W0323 23:26:41.623821 360910 logs.go:279] No container was found matching "kindnet"
I0323 23:26:41.623896 360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0323 23:26:41.644226 360910 logs.go:277] 1 containers: [eac6b13c2df0]
I0323 23:26:41.644272 360910 logs.go:123] Gathering logs for kubelet ...
I0323 23:26:41.644288 360910 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0323 23:26:41.748676 360910 logs.go:123] Gathering logs for dmesg ...
I0323 23:26:41.748714 360910 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0323 23:26:41.768332 360910 logs.go:123] Gathering logs for kube-controller-manager [9dd80939614e] ...
I0323 23:26:41.768367 360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dd80939614e"
I0323 23:26:41.792311 360910 logs.go:123] Gathering logs for kube-controller-manager [af93893100e7] ...
I0323 23:26:41.792341 360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af93893100e7"
I0323 23:26:41.830521 360910 logs.go:123] Gathering logs for etcd [a90d829451b2] ...
I0323 23:26:41.830556 360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a90d829451b2"
I0323 23:26:41.860609 360910 logs.go:123] Gathering logs for kube-scheduler [c527be391322] ...
I0323 23:26:41.860650 360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c527be391322"
I0323 23:26:41.932251 360910 logs.go:123] Gathering logs for kube-scheduler [4bb7f84567d3] ...
I0323 23:26:41.932290 360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bb7f84567d3"
I0323 23:26:41.963057 360910 logs.go:123] Gathering logs for container status ...
I0323 23:26:41.963098 360910 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0323 23:26:41.993699 360910 logs.go:123] Gathering logs for kube-apiserver [e04b42305ee7] ...
I0323 23:26:41.993742 360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e04b42305ee7"
I0323 23:26:42.025209 360910 logs.go:123] Gathering logs for Docker ...
I0323 23:26:42.025243 360910 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0323 23:26:42.056243 360910 logs.go:123] Gathering logs for describe nodes ...
I0323 23:26:42.056283 360910 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0323 23:26:42.128632 360910 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0323 23:26:42.128657 360910 logs.go:123] Gathering logs for kube-apiserver [0d8b85178a1f] ...
I0323 23:26:42.128672 360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d8b85178a1f"
I0323 23:26:42.163262 360910 logs.go:123] Gathering logs for kube-proxy [333ad261cea4] ...
I0323 23:26:42.163298 360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 333ad261cea4"
I0323 23:26:42.188287 360910 logs.go:123] Gathering logs for storage-provisioner [eac6b13c2df0] ...
I0323 23:26:42.188316 360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eac6b13c2df0"
I0323 23:26:44.714609 360910 api_server.go:252] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
I0323 23:26:44.715050 360910 api_server.go:268] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
I0323 23:26:44.915428 360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0323 23:26:44.936310 360910 logs.go:277] 2 containers: [e04b42305ee7 0d8b85178a1f]
I0323 23:26:44.936415 360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0323 23:26:44.957324 360910 logs.go:277] 1 containers: [a90d829451b2]
I0323 23:26:44.957387 360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0323 23:26:44.980654 360910 logs.go:277] 0 containers: []
W0323 23:26:44.980682 360910 logs.go:279] No container was found matching "coredns"
I0323 23:26:44.980734 360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0323 23:26:45.003148 360910 logs.go:277] 2 containers: [c527be391322 4bb7f84567d3]
I0323 23:26:45.003234 360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0323 23:26:45.022249 360910 logs.go:277] 1 containers: [333ad261cea4]
I0323 23:26:45.022323 360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0323 23:26:45.040205 360910 logs.go:277] 2 containers: [9dd80939614e af93893100e7]
I0323 23:26:45.040282 360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I0323 23:26:45.057312 360910 logs.go:277] 0 containers: []
W0323 23:26:45.057337 360910 logs.go:279] No container was found matching "kindnet"
I0323 23:26:45.057385 360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0323 23:26:45.080434 360910 logs.go:277] 1 containers: [eac6b13c2df0]
I0323 23:26:45.080479 360910 logs.go:123] Gathering logs for dmesg ...
I0323 23:26:45.080495 360910 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0323 23:26:45.104865 360910 logs.go:123] Gathering logs for kube-scheduler [4bb7f84567d3] ...
I0323 23:26:45.104918 360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bb7f84567d3"
I0323 23:26:45.133666 360910 logs.go:123] Gathering logs for kube-controller-manager [9dd80939614e] ...
I0323 23:26:45.133710 360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dd80939614e"
I0323 23:26:45.162931 360910 logs.go:123] Gathering logs for container status ...
I0323 23:26:45.162970 360910 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0323 23:26:45.202791 360910 logs.go:123] Gathering logs for kube-apiserver [e04b42305ee7] ...
I0323 23:26:45.202825 360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e04b42305ee7"
I0323 23:26:45.244277 360910 logs.go:123] Gathering logs for kube-apiserver [0d8b85178a1f] ...
I0323 23:26:45.244379 360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d8b85178a1f"
I0323 23:26:45.282659 360910 logs.go:123] Gathering logs for etcd [a90d829451b2] ...
I0323 23:26:45.282742 360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a90d829451b2"
I0323 23:26:45.313254 360910 logs.go:123] Gathering logs for kube-proxy [333ad261cea4] ...
I0323 23:26:45.313334 360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 333ad261cea4"
I0323 23:26:45.336545 360910 logs.go:123] Gathering logs for Docker ...
I0323 23:26:45.336594 360910 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0323 23:26:45.377128 360910 logs.go:123] Gathering logs for kubelet ...
I0323 23:26:45.377170 360910 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0323 23:26:45.514087 360910 logs.go:123] Gathering logs for kube-scheduler [c527be391322] ...
I0323 23:26:45.514205 360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c527be391322"
I0323 23:26:45.592082 360910 logs.go:123] Gathering logs for storage-provisioner [eac6b13c2df0] ...
I0323 23:26:45.592121 360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eac6b13c2df0"
I0323 23:26:45.619139 360910 logs.go:123] Gathering logs for describe nodes ...
I0323 23:26:45.619172 360910 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0323 23:26:45.678335 360910 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
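The repeated "failed describe nodes" warnings above are a side effect of the apiserver on this profile not answering yet: the healthz probes against 192.168.76.2:8443 and the kubectl call through localhost:8443 are both refused in the same window. A minimal way to repeat the same two checks by hand from inside the node, reusing only the endpoint, binary path and kubeconfig already shown in the log (-k skips TLS verification and is only meant for a quick probe):

# probe the apiserver health endpoint the log gatherer keeps polling
curl -k https://192.168.76.2:8443/healthz
# re-run the describe-nodes command that produced the warning above
sudo /var/lib/minikube/binaries/v1.27.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig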
I0323 23:26:45.678389 360910 logs.go:123] Gathering logs for kube-controller-manager [af93893100e7] ...
I0323 23:26:45.678404 360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af93893100e7"
I0323 23:26:45.436358 401618 start.go:890] CoreDNS already contains "host.minikube.internal" host record, skipping...
I0323 23:26:45.446881 401618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0323 23:26:45.460908 401618 node_ready.go:35] waiting up to 6m0s for node "pause-574316" to be "Ready" ...
I0323 23:26:45.463792 401618 node_ready.go:49] node "pause-574316" has status "Ready":"True"
I0323 23:26:45.463814 401618 node_ready.go:38] duration metric: took 2.869699ms waiting for node "pause-574316" to be "Ready" ...
I0323 23:26:45.463823 401618 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0323 23:26:45.468648 401618 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-lljqk" in "kube-system" namespace to be "Ready" ...
I0323 23:26:45.645139 401618 pod_ready.go:92] pod "coredns-787d4945fb-lljqk" in "kube-system" namespace has status "Ready":"True"
I0323 23:26:45.645160 401618 pod_ready.go:81] duration metric: took 176.488938ms waiting for pod "coredns-787d4945fb-lljqk" in "kube-system" namespace to be "Ready" ...
I0323 23:26:45.645170 401618 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-574316" in "kube-system" namespace to be "Ready" ...
I0323 23:26:46.045231 401618 pod_ready.go:92] pod "etcd-pause-574316" in "kube-system" namespace has status "Ready":"True"
I0323 23:26:46.045260 401618 pod_ready.go:81] duration metric: took 400.083583ms waiting for pod "etcd-pause-574316" in "kube-system" namespace to be "Ready" ...
I0323 23:26:46.045274 401618 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-574316" in "kube-system" namespace to be "Ready" ...
I0323 23:26:46.444173 401618 pod_ready.go:92] pod "kube-apiserver-pause-574316" in "kube-system" namespace has status "Ready":"True"
I0323 23:26:46.444194 401618 pod_ready.go:81] duration metric: took 398.912915ms waiting for pod "kube-apiserver-pause-574316" in "kube-system" namespace to be "Ready" ...
I0323 23:26:46.444204 401618 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-574316" in "kube-system" namespace to be "Ready" ...
I0323 23:26:46.844571 401618 pod_ready.go:92] pod "kube-controller-manager-pause-574316" in "kube-system" namespace has status "Ready":"True"
I0323 23:26:46.844592 401618 pod_ready.go:81] duration metric: took 400.382744ms waiting for pod "kube-controller-manager-pause-574316" in "kube-system" namespace to be "Ready" ...
I0323 23:26:46.844602 401618 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lnk2t" in "kube-system" namespace to be "Ready" ...
I0323 23:26:47.244514 401618 pod_ready.go:92] pod "kube-proxy-lnk2t" in "kube-system" namespace has status "Ready":"True"
I0323 23:26:47.244538 401618 pod_ready.go:81] duration metric: took 399.927693ms waiting for pod "kube-proxy-lnk2t" in "kube-system" namespace to be "Ready" ...
I0323 23:26:47.244548 401618 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-574316" in "kube-system" namespace to be "Ready" ...
I0323 23:26:47.644184 401618 pod_ready.go:92] pod "kube-scheduler-pause-574316" in "kube-system" namespace has status "Ready":"True"
I0323 23:26:47.644203 401618 pod_ready.go:81] duration metric: took 399.648889ms waiting for pod "kube-scheduler-pause-574316" in "kube-system" namespace to be "Ready" ...
I0323 23:26:47.644210 401618 pod_ready.go:38] duration metric: took 2.180378997s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0323 23:26:47.644231 401618 api_server.go:51] waiting for apiserver process to appear ...
I0323 23:26:47.644265 401618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0323 23:26:47.660462 401618 api_server.go:71] duration metric: took 2.230343116s to wait for apiserver process to appear ...
I0323 23:26:47.660489 401618 api_server.go:87] waiting for apiserver healthz status ...
I0323 23:26:47.660508 401618 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I0323 23:26:47.667464 401618 api_server.go:278] https://192.168.67.2:8443/healthz returned 200:
ok
I0323 23:26:47.668285 401618 api_server.go:140] control plane version: v1.26.3
I0323 23:26:47.668303 401618 api_server.go:130] duration metric: took 7.807644ms to wait for apiserver health ...
I0323 23:26:47.668310 401618 system_pods.go:43] waiting for kube-system pods to appear ...
I0323 23:26:47.847116 401618 system_pods.go:59] 6 kube-system pods found
I0323 23:26:47.847153 401618 system_pods.go:61] "coredns-787d4945fb-lljqk" [ce593e1c-39de-4a21-994e-157f74ab568e] Running
I0323 23:26:47.847161 401618 system_pods.go:61] "etcd-pause-574316" [7169e3e4-7786-4f24-a2dd-72dd5a23fc94] Running
I0323 23:26:47.847168 401618 system_pods.go:61] "kube-apiserver-pause-574316" [b9638a18-2208-4f86-9f5f-164a6129c16d] Running
I0323 23:26:47.847175 401618 system_pods.go:61] "kube-controller-manager-pause-574316" [8b9f404c-2710-4ae3-a29f-739d89bb6b42] Running
I0323 23:26:47.847181 401618 system_pods.go:61] "kube-proxy-lnk2t" [aeba9090-2690-42e1-8439-a0cd55ada6d0] Running
I0323 23:26:47.847187 401618 system_pods.go:61] "kube-scheduler-pause-574316" [f5014d38-c4ac-4952-bf48-afd90549b256] Running
I0323 23:26:47.847193 401618 system_pods.go:74] duration metric: took 178.878592ms to wait for pod list to return data ...
I0323 23:26:47.847201 401618 default_sa.go:34] waiting for default service account to be created ...
I0323 23:26:48.044586 401618 default_sa.go:45] found service account: "default"
I0323 23:26:48.044616 401618 default_sa.go:55] duration metric: took 197.409776ms for default service account to be created ...
I0323 23:26:48.044630 401618 system_pods.go:116] waiting for k8s-apps to be running ...
I0323 23:26:48.247931 401618 system_pods.go:86] 6 kube-system pods found
I0323 23:26:48.247963 401618 system_pods.go:89] "coredns-787d4945fb-lljqk" [ce593e1c-39de-4a21-994e-157f74ab568e] Running
I0323 23:26:48.247974 401618 system_pods.go:89] "etcd-pause-574316" [7169e3e4-7786-4f24-a2dd-72dd5a23fc94] Running
I0323 23:26:48.247980 401618 system_pods.go:89] "kube-apiserver-pause-574316" [b9638a18-2208-4f86-9f5f-164a6129c16d] Running
I0323 23:26:48.247986 401618 system_pods.go:89] "kube-controller-manager-pause-574316" [8b9f404c-2710-4ae3-a29f-739d89bb6b42] Running
I0323 23:26:48.247991 401618 system_pods.go:89] "kube-proxy-lnk2t" [aeba9090-2690-42e1-8439-a0cd55ada6d0] Running
I0323 23:26:48.247999 401618 system_pods.go:89] "kube-scheduler-pause-574316" [f5014d38-c4ac-4952-bf48-afd90549b256] Running
I0323 23:26:48.248007 401618 system_pods.go:126] duration metric: took 203.371205ms to wait for k8s-apps to be running ...
I0323 23:26:48.248015 401618 system_svc.go:44] waiting for kubelet service to be running ....
I0323 23:26:48.248065 401618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0323 23:26:48.258927 401618 system_svc.go:56] duration metric: took 10.902515ms WaitForService to wait for kubelet.
I0323 23:26:48.258954 401618 kubeadm.go:578] duration metric: took 2.828842444s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0323 23:26:48.258976 401618 node_conditions.go:102] verifying NodePressure condition ...
I0323 23:26:48.449583 401618 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
I0323 23:26:48.449608 401618 node_conditions.go:123] node cpu capacity is 8
I0323 23:26:48.449620 401618 node_conditions.go:105] duration metric: took 190.638556ms to run NodePressure ...
I0323 23:26:48.449633 401618 start.go:228] waiting for startup goroutines ...
I0323 23:26:48.449641 401618 start.go:233] waiting for cluster config update ...
I0323 23:26:48.449652 401618 start.go:242] writing updated cluster config ...
I0323 23:26:48.450019 401618 ssh_runner.go:195] Run: rm -f paused
I0323 23:26:48.534780 401618 start.go:554] kubectl: 1.26.3, cluster: 1.26.3 (minor skew: 0)
I0323 23:26:48.538018 401618 out.go:177] * Done! kubectl is now configured to use "pause-574316" cluster and "default" namespace by default
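At this point the second start of "pause-574316" is complete. A quick sanity check that mirrors what the waiters above verified; the context name and the six kube-system pods are taken from the log lines above, so exact pod suffixes may differ on another run:

kubectl config current-context    # expected: pause-574316
kubectl get pods -n kube-system   # coredns, etcd, kube-apiserver, kube-controller-manager, kube-proxy and kube-scheduler should all be Running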
I0323 23:26:44.308331 427158 cli_runner.go:164] Run: docker container inspect force-systemd-env-286741 --format={{.State.Running}}
I0323 23:26:44.394439 427158 cli_runner.go:164] Run: docker container inspect force-systemd-env-286741 --format={{.State.Status}}
I0323 23:26:44.471392 427158 cli_runner.go:164] Run: docker exec force-systemd-env-286741 stat /var/lib/dpkg/alternatives/iptables
I0323 23:26:44.603293 427158 oci.go:144] the created container "force-systemd-env-286741" has a running status.
I0323 23:26:44.603330 427158 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16143-62012/.minikube/machines/force-systemd-env-286741/id_rsa...
I0323 23:26:44.920036 427158 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-62012/.minikube/machines/force-systemd-env-286741/id_rsa.pub -> /home/docker/.ssh/authorized_keys
I0323 23:26:44.920082 427158 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16143-62012/.minikube/machines/force-systemd-env-286741/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0323 23:26:45.161321 427158 cli_runner.go:164] Run: docker container inspect force-systemd-env-286741 --format={{.State.Status}}
I0323 23:26:45.251141 427158 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0323 23:26:45.251176 427158 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-286741 chown docker:docker /home/docker/.ssh/authorized_keys]
I0323 23:26:45.400052 427158 cli_runner.go:164] Run: docker container inspect force-systemd-env-286741 --format={{.State.Status}}
I0323 23:26:45.485912 427158 machine.go:88] provisioning docker machine ...
I0323 23:26:45.485973 427158 ubuntu.go:169] provisioning hostname "force-systemd-env-286741"
I0323 23:26:45.486046 427158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-286741
I0323 23:26:45.565967 427158 main.go:141] libmachine: Using SSH client type: native
I0323 23:26:45.566601 427158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e3e0] 0x811480 <nil> [] 0s} 127.0.0.1 33004 <nil> <nil>}
I0323 23:26:45.566627 427158 main.go:141] libmachine: About to run SSH command:
sudo hostname force-systemd-env-286741 && echo "force-systemd-env-286741" | sudo tee /etc/hostname
I0323 23:26:45.780316 427158 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-env-286741
I0323 23:26:45.780413 427158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-286741
I0323 23:26:45.856411 427158 main.go:141] libmachine: Using SSH client type: native
I0323 23:26:45.857051 427158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e3e0] 0x811480 <nil> [] 0s} 127.0.0.1 33004 <nil> <nil>}
I0323 23:26:45.857097 427158 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sforce-systemd-env-286741' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-286741/g' /etc/hosts;
else
echo '127.0.1.1 force-systemd-env-286741' | sudo tee -a /etc/hosts;
fi
fi
I0323 23:26:45.977892 427158 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0323 23:26:45.977934 427158 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16143-62012/.minikube CaCertPath:/home/jenkins/minikube-integration/16143-62012/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16143-62012/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16143-62012/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16143-62012/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16143-62012/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16143-62012/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16143-62012/.minikube}
I0323 23:26:45.977978 427158 ubuntu.go:177] setting up certificates
I0323 23:26:45.977996 427158 provision.go:83] configureAuth start
I0323 23:26:45.978074 427158 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-286741
I0323 23:26:46.057572 427158 provision.go:138] copyHostCerts
I0323 23:26:46.057625 427158 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-62012/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16143-62012/.minikube/ca.pem
I0323 23:26:46.057666 427158 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-62012/.minikube/ca.pem, removing ...
I0323 23:26:46.057678 427158 exec_runner.go:207] rm: /home/jenkins/minikube-integration/16143-62012/.minikube/ca.pem
I0323 23:26:46.057752 427158 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-62012/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16143-62012/.minikube/ca.pem (1078 bytes)
I0323 23:26:46.057846 427158 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-62012/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16143-62012/.minikube/cert.pem
I0323 23:26:46.057875 427158 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-62012/.minikube/cert.pem, removing ...
I0323 23:26:46.057885 427158 exec_runner.go:207] rm: /home/jenkins/minikube-integration/16143-62012/.minikube/cert.pem
I0323 23:26:46.057920 427158 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-62012/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16143-62012/.minikube/cert.pem (1123 bytes)
I0323 23:26:46.057987 427158 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-62012/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16143-62012/.minikube/key.pem
I0323 23:26:46.058014 427158 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-62012/.minikube/key.pem, removing ...
I0323 23:26:46.058025 427158 exec_runner.go:207] rm: /home/jenkins/minikube-integration/16143-62012/.minikube/key.pem
I0323 23:26:46.058056 427158 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-62012/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16143-62012/.minikube/key.pem (1675 bytes)
I0323 23:26:46.058133 427158 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16143-62012/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16143-62012/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16143-62012/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-286741 san=[192.168.85.2 127.0.0.1 localhost 127.0.0.1 minikube force-systemd-env-286741]
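provision.go generates the server certificate with the SAN list shown in the line above. A quick, optional way to confirm what actually ended up in the certificate once it is written (standard openssl; the path is the server.pem location from the log):

openssl x509 -in /home/jenkins/minikube-integration/16143-62012/.minikube/machines/server.pem -noout -text | grep -A1 'Subject Alternative Name'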
I0323 23:26:46.508497 427158 provision.go:172] copyRemoteCerts
I0323 23:26:46.508591 427158 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0323 23:26:46.508655 427158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-286741
I0323 23:26:46.583159 427158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33004 SSHKeyPath:/home/jenkins/minikube-integration/16143-62012/.minikube/machines/force-systemd-env-286741/id_rsa Username:docker}
I0323 23:26:46.668948 427158 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-62012/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0323 23:26:46.669009 427158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-62012/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0323 23:26:46.687152 427158 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-62012/.minikube/machines/server.pem -> /etc/docker/server.pem
I0323 23:26:46.687222 427158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-62012/.minikube/machines/server.pem --> /etc/docker/server.pem (1245 bytes)
I0323 23:26:46.706760 427158 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-62012/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0323 23:26:46.706834 427158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-62012/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0323 23:26:46.724180 427158 provision.go:86] duration metric: configureAuth took 746.155987ms
I0323 23:26:46.724211 427158 ubuntu.go:193] setting minikube options for container-runtime
I0323 23:26:46.724415 427158 config.go:182] Loaded profile config "force-systemd-env-286741": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.3
I0323 23:26:46.724478 427158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-286741
I0323 23:26:46.793992 427158 main.go:141] libmachine: Using SSH client type: native
I0323 23:26:46.794421 427158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e3e0] 0x811480 <nil> [] 0s} 127.0.0.1 33004 <nil> <nil>}
I0323 23:26:46.794437 427158 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0323 23:26:46.909667 427158 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
I0323 23:26:46.909696 427158 ubuntu.go:71] root file system type: overlay
I0323 23:26:46.909827 427158 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0323 23:26:46.909896 427158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-286741
I0323 23:26:46.979665 427158 main.go:141] libmachine: Using SSH client type: native
I0323 23:26:46.980533 427158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e3e0] 0x811480 <nil> [] 0s} 127.0.0.1 33004 <nil> <nil>}
I0323 23:26:46.980649 427158 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0323 23:26:47.134741 427158 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0323 23:26:47.134814 427158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-286741
I0323 23:26:47.203471 427158 main.go:141] libmachine: Using SSH client type: native
I0323 23:26:47.203895 427158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e3e0] 0x811480 <nil> [] 0s} 127.0.0.1 33004 <nil> <nil>}
I0323 23:26:47.203914 427158 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0323 23:26:47.958910 427158 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2023-02-09 19:46:56.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2023-03-23 23:26:47.129506351 +0000
@@ -1,30 +1,32 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
-Wants=network-online.target containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
+Wants=network-online.target
Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutStartSec=0
-RestartSec=2
-Restart=always
+Restart=on-failure
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
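The command above only swaps in docker.service.new and restarts Docker when the diff is non-empty, which is why the full unified diff is echoed back here. A hedged follow-up to confirm on the node that the override took effect (plain systemd/docker commands, not part of the test itself):

sudo systemctl cat docker                   # shows the installed unit with the cleared and re-set ExecStart=
sudo systemctl is-active docker             # expect "active"
docker info --format '{{.ServerVersion}}'   # the daemon answers over the restarted service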
I0323 23:26:47.958955 427158 machine.go:91] provisioned docker machine in 2.473006765s
I0323 23:26:47.958969 427158 client.go:171] LocalClient.Create took 8.678571965s
I0323 23:26:47.958985 427158 start.go:167] duration metric: libmachine.API.Create for "force-systemd-env-286741" took 8.67862836s
I0323 23:26:47.959002 427158 start.go:300] post-start starting for "force-systemd-env-286741" (driver="docker")
I0323 23:26:47.959010 427158 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0323 23:26:47.959086 427158 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0323 23:26:47.959133 427158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-286741
I0323 23:26:48.039006 427158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33004 SSHKeyPath:/home/jenkins/minikube-integration/16143-62012/.minikube/machines/force-systemd-env-286741/id_rsa Username:docker}
I0323 23:26:48.138241 427158 ssh_runner.go:195] Run: cat /etc/os-release
I0323 23:26:48.141753 427158 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0323 23:26:48.141790 427158 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0323 23:26:48.141804 427158 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0323 23:26:48.141812 427158 info.go:137] Remote host: Ubuntu 20.04.5 LTS
I0323 23:26:48.141823 427158 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-62012/.minikube/addons for local assets ...
I0323 23:26:48.141882 427158 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-62012/.minikube/files for local assets ...
I0323 23:26:48.141972 427158 filesync.go:149] local asset: /home/jenkins/minikube-integration/16143-62012/.minikube/files/etc/ssl/certs/687022.pem -> 687022.pem in /etc/ssl/certs
I0323 23:26:48.141981 427158 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-62012/.minikube/files/etc/ssl/certs/687022.pem -> /etc/ssl/certs/687022.pem
I0323 23:26:48.142083 427158 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0323 23:26:48.149479 427158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-62012/.minikube/files/etc/ssl/certs/687022.pem --> /etc/ssl/certs/687022.pem (1708 bytes)
I0323 23:26:48.170718 427158 start.go:303] post-start completed in 211.698395ms
I0323 23:26:48.171159 427158 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-286741
I0323 23:26:48.255406 427158 profile.go:148] Saving config to /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/force-systemd-env-286741/config.json ...
I0323 23:26:48.255709 427158 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0323 23:26:48.255768 427158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-286741
I0323 23:26:48.348731 427158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33004 SSHKeyPath:/home/jenkins/minikube-integration/16143-62012/.minikube/machines/force-systemd-env-286741/id_rsa Username:docker}
I0323 23:26:48.444848 427158 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0323 23:26:48.454096 427158 start.go:128] duration metric: createHost completed in 9.176760391s
I0323 23:26:48.454122 427158 start.go:83] releasing machines lock for "force-systemd-env-286741", held for 9.176923746s
I0323 23:26:48.454203 427158 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-286741
I0323 23:26:48.544171 427158 ssh_runner.go:195] Run: cat /version.json
I0323 23:26:48.544227 427158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-286741
I0323 23:26:48.544232 427158 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0323 23:26:48.544306 427158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-286741
I0323 23:26:48.702573 427158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33004 SSHKeyPath:/home/jenkins/minikube-integration/16143-62012/.minikube/machines/force-systemd-env-286741/id_rsa Username:docker}
I0323 23:26:48.713344 427158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33004 SSHKeyPath:/home/jenkins/minikube-integration/16143-62012/.minikube/machines/force-systemd-env-286741/id_rsa Username:docker}
I0323 23:26:48.792996 427158 ssh_runner.go:195] Run: systemctl --version
*
* ==> Docker <==
* -- Logs begin at Thu 2023-03-23 23:25:05 UTC, end at Thu 2023-03-23 23:26:50 UTC. --
Mar 23 23:25:49 pause-574316 dockerd[5186]: time="2023-03-23T23:25:49.002500928Z" level=info msg="[core] [Channel #4] Channel Connectivity change to CONNECTING" module=grpc
Mar 23 23:25:49 pause-574316 dockerd[5186]: time="2023-03-23T23:25:49.002674094Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to READY" module=grpc
Mar 23 23:25:49 pause-574316 dockerd[5186]: time="2023-03-23T23:25:49.002709828Z" level=info msg="[core] [Channel #4] Channel Connectivity change to READY" module=grpc
Mar 23 23:25:49 pause-574316 dockerd[5186]: time="2023-03-23T23:25:49.003286601Z" level=info msg="[graphdriver] trying configured driver: overlay2"
Mar 23 23:25:49 pause-574316 dockerd[5186]: time="2023-03-23T23:25:49.025479889Z" level=info msg="Loading containers: start."
Mar 23 23:25:49 pause-574316 dockerd[5186]: time="2023-03-23T23:25:49.172830226Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Mar 23 23:25:49 pause-574316 dockerd[5186]: time="2023-03-23T23:25:49.214010134Z" level=info msg="Loading containers: done."
Mar 23 23:25:49 pause-574316 dockerd[5186]: time="2023-03-23T23:25:49.225800214Z" level=info msg="Docker daemon" commit=bc3805a graphdriver=overlay2 version=23.0.1
Mar 23 23:25:49 pause-574316 dockerd[5186]: time="2023-03-23T23:25:49.225888560Z" level=info msg="Daemon has completed initialization"
Mar 23 23:25:49 pause-574316 dockerd[5186]: time="2023-03-23T23:25:49.240113456Z" level=info msg="[core] [Server #7] Server created" module=grpc
Mar 23 23:25:49 pause-574316 systemd[1]: Started Docker Application Container Engine.
Mar 23 23:25:49 pause-574316 dockerd[5186]: time="2023-03-23T23:25:49.246358737Z" level=info msg="API listen on [::]:2376"
Mar 23 23:25:49 pause-574316 dockerd[5186]: time="2023-03-23T23:25:49.256115277Z" level=info msg="API listen on /var/run/docker.sock"
Mar 23 23:26:11 pause-574316 dockerd[5186]: time="2023-03-23T23:26:11.796102440Z" level=info msg="ignoring event" container=6a198df97e4bd33611868552786c34b16a1896b4a18709ad6eaa65e7486b5d20 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 23 23:26:20 pause-574316 dockerd[5186]: time="2023-03-23T23:26:20.145302003Z" level=info msg="ignoring event" container=45416a5cd36b4138409f0bf454eb922e1d3369a86ce1c0c803f7da26778cf7f4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 23 23:26:20 pause-574316 dockerd[5186]: time="2023-03-23T23:26:20.379532489Z" level=info msg="ignoring event" container=60c1dee0f1786db1b413aa688e7a57acd71e6c18979e95b21131d3496a98cad8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 23 23:26:20 pause-574316 dockerd[5186]: time="2023-03-23T23:26:20.392985764Z" level=info msg="ignoring event" container=840b0c35d4448d1362a7bc020e0fac35331ad72438dfc00e79685e0baca6b11b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 23 23:26:20 pause-574316 dockerd[5186]: time="2023-03-23T23:26:20.453179245Z" level=info msg="ignoring event" container=656b70fafbc2b7e6611131272fea7433846a18987047e3c8d2e446e8b5290cce module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 23 23:26:20 pause-574316 dockerd[5186]: time="2023-03-23T23:26:20.457378879Z" level=info msg="ignoring event" container=f70a37494730e3c42d183c94cd69472a7f672f61f330f75482164f78d4eda989 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 23 23:26:20 pause-574316 dockerd[5186]: time="2023-03-23T23:26:20.459285840Z" level=info msg="ignoring event" container=2b7bc2ac835be2dc569bede97afe45c6357e58e4e23f23539dc1433d3a84bedc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 23 23:26:20 pause-574316 dockerd[5186]: time="2023-03-23T23:26:20.460667173Z" level=info msg="ignoring event" container=d517e8e4d5d2dbd1822c028a0de7f091686d0e0657198f93573dd122ee6485a8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 23 23:26:20 pause-574316 dockerd[5186]: time="2023-03-23T23:26:20.460699514Z" level=info msg="ignoring event" container=4b1c73f39f8c07193f987da6a6d6784c9f87cb43caa7ea5f424e367b0f2e27e2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 23 23:26:20 pause-574316 dockerd[5186]: time="2023-03-23T23:26:20.467741307Z" level=info msg="ignoring event" container=80c388522552702a89135b09d2d073b9c57d1fbc851a0a89b0cec032be049f71 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 23 23:26:20 pause-574316 dockerd[5186]: time="2023-03-23T23:26:20.471167750Z" level=info msg="ignoring event" container=7ff3dcd747a3b0f733eda143cf5993de0d0e1afd3dbd1b2b2f9a8fd3dbea2be9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 23 23:26:25 pause-574316 dockerd[5186]: time="2023-03-23T23:26:25.347736368Z" level=info msg="ignoring event" container=a9b1dc3910d9b5195bfff4b0d6cedbf54b214159654d4e23645c839bf053ad23 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
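The burst of "ignoring event ... TaskDelete" messages appears to record the first-generation pod containers being torn down while the control plane restarts; the container IDs match the Exited entries in the container status table below. The same journal slice can be pulled straight from the node, mirroring the journalctl invocation the log gatherer itself uses above:

minikube -p pause-574316 ssh -- sudo journalctl -u docker -n 400 --no-pager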
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
0f0398bddb511 5185b96f0becf 18 seconds ago Running coredns 3 542477f9c5e1d
43a8930300a57 92ed2bec97a63 18 seconds ago Running kube-proxy 2 28a061395dad5
e7cd8ca7c7242 5a79047369329 23 seconds ago Running kube-scheduler 3 4c131416edb23
f946ab43717f1 ce8c2293ef09c 23 seconds ago Running kube-controller-manager 3 3ca9ec9bef2c4
1137111a33d08 fce326961ae2d 23 seconds ago Running etcd 3 f4e9af6f99313
cea7ca7eb9ad0 1d9b3cbae03ce 28 seconds ago Running kube-apiserver 2 f84cdf335e887
656b70fafbc2b fce326961ae2d 39 seconds ago Exited etcd 2 60c1dee0f1786
2b7bc2ac835be 5a79047369329 50 seconds ago Exited kube-scheduler 2 4b1c73f39f8c0
7ff3dcd747a3b 92ed2bec97a63 51 seconds ago Exited kube-proxy 1 d517e8e4d5d2d
45416a5cd36b4 ce8c2293ef09c 51 seconds ago Exited kube-controller-manager 2 f70a37494730e
a9b1dc3910d9b 5185b96f0becf 59 seconds ago Exited coredns 2 840b0c35d4448
6a198df97e4bd 1d9b3cbae03ce 59 seconds ago Exited kube-apiserver 1 80c3885225527
*
* ==> coredns [0f0398bddb51] <==
* .:53
[INFO] plugin/reload: Running configuration SHA512 = bfa258e3dfcd8004ab6c7d60772766a595ee209e49c62e6ae56bd911a145318b327e0c73bbccac30667047dafea6a8c1149027cea85d58a2246677e8ec1caab2
CoreDNS-1.9.3
linux/amd64, go1.18.2, 45b0a11
[INFO] 127.0.0.1:52573 - 39862 "HINFO IN 4074527240347548607.320685648437704123. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.037884079s
*
* ==> coredns [a9b1dc3910d9] <==
* [INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
.:53
[INFO] plugin/reload: Running configuration SHA512 = bfa258e3dfcd8004ab6c7d60772766a595ee209e49c62e6ae56bd911a145318b327e0c73bbccac30667047dafea6a8c1149027cea85d58a2246677e8ec1caab2
CoreDNS-1.9.3
linux/amd64, go1.18.2, 45b0a11
[INFO] 127.0.0.1:45219 - 2821 "HINFO IN 6139167459808748397.3590652508084774261. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.035135004s
[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": net/http: TLS handshake timeout
[INFO] plugin/ready: Still waiting on: "kubernetes"
[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
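This older coredns instance lost the apiserver during the restart (TLS handshake timeout, then connection refused to 10.96.0.1:443) and was shut down with SIGTERM, which matches its Exited entry in the container status table above. Once the cluster is reachable again, the live coredns logs can be pulled through the API with something like:

kubectl -n kube-system logs -l k8s-app=kube-dns --tail=20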
*
* ==> describe nodes <==
* Name: pause-574316
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=pause-574316
kubernetes.io/os=linux
minikube.k8s.io/commit=e9478c9159ab3ccef5e7f933edc25c8da75bed69
minikube.k8s.io/name=pause-574316
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2023_03_23T23_25_21_0700
minikube.k8s.io/version=v1.29.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Thu, 23 Mar 2023 23:25:18 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: pause-574316
AcquireTime: <unset>
RenewTime: Thu, 23 Mar 2023 23:26:41 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Thu, 23 Mar 2023 23:26:30 +0000 Thu, 23 Mar 2023 23:25:16 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 23 Mar 2023 23:26:30 +0000 Thu, 23 Mar 2023 23:25:16 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Thu, 23 Mar 2023 23:26:30 +0000 Thu, 23 Mar 2023 23:25:16 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Thu, 23 Mar 2023 23:26:30 +0000 Thu, 23 Mar 2023 23:25:32 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.67.2
Hostname: pause-574316
Capacity:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32871748Ki
pods: 110
Allocatable:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32871748Ki
pods: 110
System Info:
Machine ID: b249c14bbd9147e887f6315aff00ef06
System UUID: 7bdff168-7cdd-493c-bdda-f1cc26739b6e
Boot ID: 9d192f19-d9f5-4df3-a502-4030f2da5343
Kernel Version: 5.15.0-1030-gcp
OS Image: Ubuntu 20.04.5 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://23.0.1
Kubelet Version: v1.26.3
Kube-Proxy Version: v1.26.3
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (6 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system                 coredns-787d4945fb-lljqk                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     76s
kube-system                 etcd-pause-574316                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         88s
kube-system                 kube-apiserver-pause-574316             250m (3%)     0 (0%)      0 (0%)           0 (0%)         91s
kube-system                 kube-controller-manager-pause-574316    200m (2%)     0 (0%)      0 (0%)           0 (0%)         89s
kube-system                 kube-proxy-lnk2t                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         77s
kube-system                 kube-scheduler-pause-574316             100m (1%)     0 (0%)      0 (0%)           0 (0%)         89s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu                750m (9%)   0 (0%)
memory             170Mi (0%)  170Mi (0%)
ephemeral-storage  0 (0%)      0 (0%)
hugepages-1Gi      0 (0%)      0 (0%)
hugepages-2Mi      0 (0%)      0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 75s kube-proxy
Normal Starting 17s kube-proxy
Normal NodeHasSufficientPID 96s (x3 over 96s) kubelet Node pause-574316 status is now: NodeHasSufficientPID
Normal NodeHasNoDiskPressure 96s (x4 over 96s) kubelet Node pause-574316 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientMemory 96s (x4 over 96s) kubelet Node pause-574316 status is now: NodeHasSufficientMemory
Normal Starting 89s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 89s kubelet Node pause-574316 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 89s kubelet Node pause-574316 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 89s kubelet Node pause-574316 status is now: NodeHasSufficientPID
Normal NodeNotReady 89s kubelet Node pause-574316 status is now: NodeNotReady
Normal NodeAllocatableEnforced 89s kubelet Updated Node Allocatable limit across pods
Normal NodeReady 78s kubelet Node pause-574316 status is now: NodeReady
Normal RegisteredNode 77s node-controller Node pause-574316 event: Registered Node pause-574316 in Controller
Normal Starting 23s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 23s (x8 over 23s) kubelet Node pause-574316 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 23s (x8 over 23s) kubelet Node pause-574316 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 23s (x7 over 23s) kubelet Node pause-574316 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 23s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 7s node-controller Node pause-574316 event: Registered Node pause-574316 in Controller
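The same node description can be fetched through the apiserver once it is reachable again; the node name and the allocatable CPU figure below are taken from the tables above:

kubectl describe node pause-574316
kubectl get node pause-574316 -o jsonpath='{.status.allocatable.cpu}'   # expect 8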
*
* ==> dmesg <==
* [ +0.000619] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
[ +0.000004] ll header: 00000000: ff ff ff ff ff ff da 9a 31 26 91 58 08 06
[ +46.489619] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff de 03 7b bf b1 b8 08 06
[Mar23 23:19] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
[ +0.000004] ll header: 00000000: ff ff ff ff ff ff 12 06 3d f3 17 47 08 06
[Mar23 23:21] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev bridge
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff 0e 93 92 d3 0d 7e 08 06
[ +0.437885] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 0e 93 92 d3 0d 7e 08 06
[Mar23 23:22] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff 9a 9e 53 5f 42 99 08 06
[Mar23 23:23] process 'docker/tmp/qemu-check941714971/check' started with executable stack
[ +9.389883] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff 6e f3 36 2c c1 cd 08 06
[Mar23 23:25] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff ae cb 28 07 13 77 08 06
[ +0.012995] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 0c 92 4c a9 1c 08 06
[ +15.547404] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff 76 10 ab 83 31 f9 08 06
[Mar23 23:26] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff da 20 81 ad 5c b9 08 06
[ +5.887427] IPv4: martian source 10.244.0.1 from 10.244.0.6, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff ba 6b a8 e3 05 d7 08 06
*
* ==> etcd [1137111a33d0] <==
* {"level":"info","ts":"2023-03-23T23:26:27.969Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2023-03-23T23:26:27.969Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.67.2:2380"}
{"level":"info","ts":"2023-03-23T23:26:27.969Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.67.2:2380"}
{"level":"info","ts":"2023-03-23T23:26:27.969Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2023-03-23T23:26:27.969Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2023-03-23T23:26:29.058Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 4"}
{"level":"info","ts":"2023-03-23T23:26:29.058Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 4"}
{"level":"info","ts":"2023-03-23T23:26:29.058Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 4"}
{"level":"info","ts":"2023-03-23T23:26:29.058Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 5"}
{"level":"info","ts":"2023-03-23T23:26:29.058Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 5"}
{"level":"info","ts":"2023-03-23T23:26:29.058Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 5"}
{"level":"info","ts":"2023-03-23T23:26:29.058Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 5"}
{"level":"info","ts":"2023-03-23T23:26:29.059Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:pause-574316 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
{"level":"info","ts":"2023-03-23T23:26:29.059Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-03-23T23:26:29.059Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-03-23T23:26:29.059Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2023-03-23T23:26:29.060Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2023-03-23T23:26:29.061Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2023-03-23T23:26:29.061Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
{"level":"info","ts":"2023-03-23T23:26:45.146Z","caller":"traceutil/trace.go:171","msg":"trace[1088553463] linearizableReadLoop","detail":"{readStateIndex:500; appliedIndex:499; }","duration":"187.629875ms","start":"2023-03-23T23:26:44.958Z","end":"2023-03-23T23:26:45.146Z","steps":["trace[1088553463] 'read index received' (duration: 113.126176ms)","trace[1088553463] 'applied index is now lower than readState.Index' (duration: 74.502878ms)"],"step_count":2}
{"level":"info","ts":"2023-03-23T23:26:45.146Z","caller":"traceutil/trace.go:171","msg":"trace[1657399943] transaction","detail":"{read_only:false; response_revision:463; number_of_response:1; }","duration":"197.637334ms","start":"2023-03-23T23:26:44.948Z","end":"2023-03-23T23:26:45.146Z","steps":["trace[1657399943] 'process raft request' (duration: 123.099553ms)","trace[1657399943] 'compare' (duration: 74.347233ms)"],"step_count":2}
{"level":"warn","ts":"2023-03-23T23:26:45.146Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"187.827176ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-controller-manager-pause-574316\" ","response":"range_response_count:1 size:6942"}
{"level":"info","ts":"2023-03-23T23:26:45.146Z","caller":"traceutil/trace.go:171","msg":"trace[666014890] range","detail":"{range_begin:/registry/pods/kube-system/kube-controller-manager-pause-574316; range_end:; response_count:1; response_revision:463; }","duration":"187.950429ms","start":"2023-03-23T23:26:44.958Z","end":"2023-03-23T23:26:45.146Z","steps":["trace[666014890] 'agreement among raft nodes before linearized reading' (duration: 187.770048ms)"],"step_count":1}
{"level":"warn","ts":"2023-03-23T23:26:45.429Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"133.41564ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/coredns\" ","response":"range_response_count:1 size:764"}
{"level":"info","ts":"2023-03-23T23:26:45.429Z","caller":"traceutil/trace.go:171","msg":"trace[1689761979] range","detail":"{range_begin:/registry/configmaps/kube-system/coredns; range_end:; response_count:1; response_revision:463; }","duration":"133.510104ms","start":"2023-03-23T23:26:45.295Z","end":"2023-03-23T23:26:45.429Z","steps":["trace[1689761979] 'range keys from in-memory index tree' (duration: 133.250873ms)"],"step_count":1}
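The "apply request took too long" warnings are etcd flagging individual requests that exceeded its 100ms expected duration during the restart; on their own they are warnings, not failures. A sketch of how the member's health could be checked from inside the pod, reusing the endpoint and certificate paths printed in this section and assuming etcdctl is present in the etcd image (it ships in the standard kubeadm etcd image):

kubectl -n kube-system exec etcd-pause-574316 -- etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/var/lib/minikube/certs/etcd/ca.crt \
  --cert=/var/lib/minikube/certs/etcd/server.crt \
  --key=/var/lib/minikube/certs/etcd/server.key \
  endpoint health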
*
* ==> etcd [656b70fafbc2] <==
* {"level":"info","ts":"2023-03-23T23:26:12.885Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2023-03-23T23:26:12.885Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.67.2:2380"}
{"level":"info","ts":"2023-03-23T23:26:12.885Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.67.2:2380"}
{"level":"info","ts":"2023-03-23T23:26:12.885Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2023-03-23T23:26:12.885Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2023-03-23T23:26:14.575Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 3"}
{"level":"info","ts":"2023-03-23T23:26:14.575Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 3"}
{"level":"info","ts":"2023-03-23T23:26:14.575Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 3"}
{"level":"info","ts":"2023-03-23T23:26:14.575Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 4"}
{"level":"info","ts":"2023-03-23T23:26:14.575Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 4"}
{"level":"info","ts":"2023-03-23T23:26:14.575Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 4"}
{"level":"info","ts":"2023-03-23T23:26:14.575Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 4"}
{"level":"info","ts":"2023-03-23T23:26:14.576Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:pause-574316 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
{"level":"info","ts":"2023-03-23T23:26:14.576Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-03-23T23:26:14.576Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-03-23T23:26:14.576Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2023-03-23T23:26:14.576Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2023-03-23T23:26:14.577Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
{"level":"info","ts":"2023-03-23T23:26:14.577Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2023-03-23T23:26:20.377Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
{"level":"info","ts":"2023-03-23T23:26:20.377Z","caller":"embed/etcd.go:373","msg":"closing etcd server","name":"pause-574316","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
{"level":"info","ts":"2023-03-23T23:26:20.380Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8688e899f7831fc7","current-leader-member-id":"8688e899f7831fc7"}
{"level":"info","ts":"2023-03-23T23:26:20.382Z","caller":"embed/etcd.go:568","msg":"stopping serving peer traffic","address":"192.168.67.2:2380"}
{"level":"info","ts":"2023-03-23T23:26:20.384Z","caller":"embed/etcd.go:573","msg":"stopped serving peer traffic","address":"192.168.67.2:2380"}
{"level":"info","ts":"2023-03-23T23:26:20.384Z","caller":"embed/etcd.go:375","msg":"closed etcd server","name":"pause-574316","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
*
* ==> kernel <==
* 23:26:50 up 2:09, 0 users, load average: 5.27, 4.14, 2.82
Linux pause-574316 5.15.0-1030-gcp #37~20.04.1-Ubuntu SMP Mon Feb 20 04:30:57 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.5 LTS"
*
* ==> kube-apiserver [6a198df97e4b] <==
* W0323 23:26:08.603014 1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {
"Addr": "127.0.0.1:2379",
"ServerName": "127.0.0.1",
"Attributes": null,
"BalancerAttributes": null,
"Type": 0,
"Metadata": null
}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
W0323 23:26:09.405661 1 logging.go:59] [core] [Channel #4 SubChannel #6] grpc: addrConn.createTransport failed to connect to {
"Addr": "127.0.0.1:2379",
"ServerName": "127.0.0.1",
"Attributes": null,
"BalancerAttributes": null,
"Type": 0,
"Metadata": null
}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
W0323 23:26:09.657900 1 logging.go:59] [core] [Channel #3 SubChannel #5] grpc: addrConn.createTransport failed to connect to {
"Addr": "127.0.0.1:2379",
"ServerName": "127.0.0.1",
"Attributes": null,
"BalancerAttributes": null,
"Type": 0,
"Metadata": null
}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
E0323 23:26:11.774251 1 run.go:74] "command failed" err="context deadline exceeded"
*
* ==> kube-apiserver [cea7ca7eb9ad] <==
* I0323 23:26:30.648351 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I0323 23:26:30.648430 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I0323 23:26:30.684300 1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
I0323 23:26:30.639853 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
I0323 23:26:30.639867 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0323 23:26:30.639933 1 autoregister_controller.go:141] Starting autoregister controller
I0323 23:26:30.690081 1 cache.go:32] Waiting for caches to sync for autoregister controller
I0323 23:26:30.690161 1 cache.go:39] Caches are synced for autoregister controller
I0323 23:26:30.701389 1 shared_informer.go:280] Caches are synced for node_authorizer
I0323 23:26:30.750507 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0323 23:26:30.750975 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0323 23:26:30.752373 1 apf_controller.go:366] Running API Priority and Fairness config worker
I0323 23:26:30.752385 1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
I0323 23:26:30.752497 1 shared_informer.go:280] Caches are synced for configmaps
I0323 23:26:30.753570 1 shared_informer.go:280] Caches are synced for cluster_authentication_trust_controller
I0323 23:26:30.753615 1 shared_informer.go:280] Caches are synced for crd-autoregister
I0323 23:26:31.339987 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0323 23:26:31.646840 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0323 23:26:32.375391 1 controller.go:615] quota admission added evaluator for: serviceaccounts
I0323 23:26:32.388141 1 controller.go:615] quota admission added evaluator for: deployments.apps
I0323 23:26:32.474747 1 controller.go:615] quota admission added evaluator for: daemonsets.apps
I0323 23:26:32.557448 1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0323 23:26:32.566478 1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0323 23:26:43.845098 1 controller.go:615] quota admission added evaluator for: endpoints
I0323 23:26:43.899216 1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
*
* ==> kube-controller-manager [45416a5cd36b] <==
* I0323 23:25:59.829591 1 serving.go:348] Generated self-signed cert in-memory
I0323 23:26:00.084118 1 controllermanager.go:182] Version: v1.26.3
I0323 23:26:00.084152 1 controllermanager.go:184] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0323 23:26:00.085310 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I0323 23:26:00.085306 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I0323 23:26:00.085554 1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
I0323 23:26:00.085646 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
F0323 23:26:20.087377 1 controllermanager.go:228] error building controller context: failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
*
* ==> kube-controller-manager [f946ab43717f] <==
* I0323 23:26:43.682858 1 shared_informer.go:280] Caches are synced for endpoint_slice_mirroring
I0323 23:26:43.685481 1 shared_informer.go:280] Caches are synced for GC
I0323 23:26:43.691799 1 shared_informer.go:280] Caches are synced for HPA
I0323 23:26:43.691846 1 shared_informer.go:280] Caches are synced for daemon sets
I0323 23:26:43.691921 1 shared_informer.go:280] Caches are synced for PVC protection
I0323 23:26:43.691962 1 shared_informer.go:280] Caches are synced for endpoint
I0323 23:26:43.692814 1 shared_informer.go:280] Caches are synced for ephemeral
I0323 23:26:43.692841 1 shared_informer.go:280] Caches are synced for cronjob
I0323 23:26:43.692907 1 shared_informer.go:280] Caches are synced for service account
I0323 23:26:43.696646 1 shared_informer.go:280] Caches are synced for taint
I0323 23:26:43.696746 1 node_lifecycle_controller.go:1438] Initializing eviction metric for zone:
I0323 23:26:43.696779 1 taint_manager.go:206] "Starting NoExecuteTaintManager"
W0323 23:26:43.696843 1 node_lifecycle_controller.go:1053] Missing timestamp for Node pause-574316. Assuming now as a timestamp.
I0323 23:26:43.696884 1 taint_manager.go:211] "Sending events to api server"
I0323 23:26:43.696913 1 node_lifecycle_controller.go:1254] Controller detected that zone is now in state Normal.
I0323 23:26:43.697076 1 event.go:294] "Event occurred" object="pause-574316" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-574316 event: Registered Node pause-574316 in Controller"
I0323 23:26:43.698625 1 shared_informer.go:280] Caches are synced for crt configmap
I0323 23:26:43.701545 1 shared_informer.go:280] Caches are synced for endpoint_slice
I0323 23:26:43.740889 1 shared_informer.go:280] Caches are synced for attach detach
I0323 23:26:43.792552 1 shared_informer.go:280] Caches are synced for disruption
I0323 23:26:43.821372 1 shared_informer.go:280] Caches are synced for resource quota
I0323 23:26:43.894489 1 shared_informer.go:280] Caches are synced for resource quota
I0323 23:26:44.210014 1 shared_informer.go:280] Caches are synced for garbage collector
I0323 23:26:44.229157 1 shared_informer.go:280] Caches are synced for garbage collector
I0323 23:26:44.229247 1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
*
* ==> kube-proxy [43a8930300a5] <==
* I0323 23:26:32.502821 1 node.go:163] Successfully retrieved node IP: 192.168.67.2
I0323 23:26:32.502919 1 server_others.go:109] "Detected node IP" address="192.168.67.2"
I0323 23:26:32.503040 1 server_others.go:535] "Using iptables proxy"
I0323 23:26:32.581352 1 server_others.go:176] "Using iptables Proxier"
I0323 23:26:32.581492 1 server_others.go:183] "kube-proxy running in dual-stack mode" ipFamily=IPv4
I0323 23:26:32.581507 1 server_others.go:184] "Creating dualStackProxier for iptables"
I0323 23:26:32.581525 1 server_others.go:465] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
I0323 23:26:32.581580 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
I0323 23:26:32.582126 1 server.go:655] "Version info" version="v1.26.3"
I0323 23:26:32.582166 1 server.go:657] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0323 23:26:32.582886 1 config.go:226] "Starting endpoint slice config controller"
I0323 23:26:32.583504 1 shared_informer.go:273] Waiting for caches to sync for endpoint slice config
I0323 23:26:32.583082 1 config.go:317] "Starting service config controller"
I0323 23:26:32.583523 1 shared_informer.go:273] Waiting for caches to sync for service config
I0323 23:26:32.583137 1 config.go:444] "Starting node config controller"
I0323 23:26:32.583545 1 shared_informer.go:273] Waiting for caches to sync for node config
I0323 23:26:32.684533 1 shared_informer.go:280] Caches are synced for service config
I0323 23:26:32.684613 1 shared_informer.go:280] Caches are synced for node config
I0323 23:26:32.684623 1 shared_informer.go:280] Caches are synced for endpoint slice config
*
* ==> kube-proxy [7ff3dcd747a3] <==
* E0323 23:26:09.977748 1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-574316": net/http: TLS handshake timeout
E0323 23:26:12.783360 1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-574316": dial tcp 192.168.67.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.67.2:39882->192.168.67.2:8443: read: connection reset by peer
E0323 23:26:14.853949 1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-574316": dial tcp 192.168.67.2:8443: connect: connection refused
E0323 23:26:18.965897 1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-574316": dial tcp 192.168.67.2:8443: connect: connection refused
*
* ==> kube-scheduler [2b7bc2ac835b] <==
* W0323 23:26:16.679162 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: Get "https://192.168.67.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
E0323 23:26:16.679200 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.67.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
W0323 23:26:16.812219 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: Get "https://192.168.67.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
E0323 23:26:16.812268 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.67.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
W0323 23:26:16.846940 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://192.168.67.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
E0323 23:26:16.846981 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.67.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
W0323 23:26:17.007369 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: Get "https://192.168.67.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
E0323 23:26:17.007406 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.67.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
W0323 23:26:19.575702 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: Get "https://192.168.67.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
E0323 23:26:19.575741 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.67.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
W0323 23:26:19.775890 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: Get "https://192.168.67.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
E0323 23:26:19.775937 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.67.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
W0323 23:26:19.850977 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: Get "https://192.168.67.2:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
E0323 23:26:19.851021 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.67.2:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
W0323 23:26:20.060721 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: Get "https://192.168.67.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
E0323 23:26:20.060762 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.67.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
W0323 23:26:20.080470 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: Get "https://192.168.67.2:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
E0323 23:26:20.080525 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.67.2:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
W0323 23:26:20.208535 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: Get "https://192.168.67.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
E0323 23:26:20.208595 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.67.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
I0323 23:26:20.353988 1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
I0323 23:26:20.354103 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
E0323 23:26:20.354167 1 shared_informer.go:276] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0323 23:26:20.354182 1 configmap_cafile_content.go:210] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
E0323 23:26:20.354209 1 run.go:74] "command failed" err="finished without leader elect"
*
* ==> kube-scheduler [e7cd8ca7c724] <==
* I0323 23:26:28.403386 1 serving.go:348] Generated self-signed cert in-memory
I0323 23:26:30.771476 1 server.go:152] "Starting Kubernetes Scheduler" version="v1.26.3"
I0323 23:26:30.771503 1 server.go:154] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0323 23:26:30.778353 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
I0323 23:26:30.778381 1 shared_informer.go:273] Waiting for caches to sync for RequestHeaderAuthRequestController
I0323 23:26:30.778428 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0323 23:26:30.778441 1 shared_informer.go:273] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0323 23:26:30.778478 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I0323 23:26:30.778489 1 shared_informer.go:273] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0323 23:26:30.779761 1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
I0323 23:26:30.784753 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0323 23:26:30.878975 1 shared_informer.go:280] Caches are synced for RequestHeaderAuthRequestController
I0323 23:26:30.879041 1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0323 23:26:30.878980 1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
*
* ==> kubelet <==
* -- Logs begin at Thu 2023-03-23 23:25:05 UTC, end at Thu 2023-03-23 23:26:50 UTC. --
Mar 23 23:26:27 pause-574316 kubelet[7640]: I0323 23:26:27.503080 7640 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/16bcc950c7983e1395e2f1091ca3b040-usr-share-ca-certificates\") pod \"kube-controller-manager-pause-574316\" (UID: \"16bcc950c7983e1395e2f1091ca3b040\") " pod="kube-system/kube-controller-manager-pause-574316"
Mar 23 23:26:27 pause-574316 kubelet[7640]: I0323 23:26:27.748833 7640 scope.go:115] "RemoveContainer" containerID="656b70fafbc2b7e6611131272fea7433846a18987047e3c8d2e446e8b5290cce"
Mar 23 23:26:27 pause-574316 kubelet[7640]: I0323 23:26:27.763712 7640 scope.go:115] "RemoveContainer" containerID="45416a5cd36b4138409f0bf454eb922e1d3369a86ce1c0c803f7da26778cf7f4"
Mar 23 23:26:27 pause-574316 kubelet[7640]: I0323 23:26:27.773578 7640 scope.go:115] "RemoveContainer" containerID="2b7bc2ac835be2dc569bede97afe45c6357e58e4e23f23539dc1433d3a84bedc"
Mar 23 23:26:30 pause-574316 kubelet[7640]: I0323 23:26:30.818789 7640 kubelet_node_status.go:108] "Node was previously registered" node="pause-574316"
Mar 23 23:26:30 pause-574316 kubelet[7640]: I0323 23:26:30.819442 7640 kubelet_node_status.go:73] "Successfully registered node" node="pause-574316"
Mar 23 23:26:30 pause-574316 kubelet[7640]: I0323 23:26:30.821124 7640 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
Mar 23 23:26:30 pause-574316 kubelet[7640]: I0323 23:26:30.827327 7640 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
Mar 23 23:26:31 pause-574316 kubelet[7640]: I0323 23:26:31.062727 7640 apiserver.go:52] "Watching apiserver"
Mar 23 23:26:31 pause-574316 kubelet[7640]: I0323 23:26:31.069251 7640 topology_manager.go:210] "Topology Admit Handler"
Mar 23 23:26:31 pause-574316 kubelet[7640]: I0323 23:26:31.069369 7640 topology_manager.go:210] "Topology Admit Handler"
Mar 23 23:26:31 pause-574316 kubelet[7640]: I0323 23:26:31.069450 7640 topology_manager.go:210] "Topology Admit Handler"
Mar 23 23:26:31 pause-574316 kubelet[7640]: I0323 23:26:31.098738 7640 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Mar 23 23:26:31 pause-574316 kubelet[7640]: I0323 23:26:31.160848 7640 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxzp5\" (UniqueName: \"kubernetes.io/projected/aeba9090-2690-42e1-8439-a0cd55ada6d0-kube-api-access-kxzp5\") pod \"kube-proxy-lnk2t\" (UID: \"aeba9090-2690-42e1-8439-a0cd55ada6d0\") " pod="kube-system/kube-proxy-lnk2t"
Mar 23 23:26:31 pause-574316 kubelet[7640]: I0323 23:26:31.160919 7640 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wm5m\" (UniqueName: \"kubernetes.io/projected/ce593e1c-39de-4a21-994e-157f74ab568e-kube-api-access-8wm5m\") pod \"coredns-787d4945fb-lljqk\" (UID: \"ce593e1c-39de-4a21-994e-157f74ab568e\") " pod="kube-system/coredns-787d4945fb-lljqk"
Mar 23 23:26:31 pause-574316 kubelet[7640]: I0323 23:26:31.160966 7640 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aeba9090-2690-42e1-8439-a0cd55ada6d0-lib-modules\") pod \"kube-proxy-lnk2t\" (UID: \"aeba9090-2690-42e1-8439-a0cd55ada6d0\") " pod="kube-system/kube-proxy-lnk2t"
Mar 23 23:26:31 pause-574316 kubelet[7640]: I0323 23:26:31.161002 7640 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce593e1c-39de-4a21-994e-157f74ab568e-config-volume\") pod \"coredns-787d4945fb-lljqk\" (UID: \"ce593e1c-39de-4a21-994e-157f74ab568e\") " pod="kube-system/coredns-787d4945fb-lljqk"
Mar 23 23:26:31 pause-574316 kubelet[7640]: I0323 23:26:31.161027 7640 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/aeba9090-2690-42e1-8439-a0cd55ada6d0-kube-proxy\") pod \"kube-proxy-lnk2t\" (UID: \"aeba9090-2690-42e1-8439-a0cd55ada6d0\") " pod="kube-system/kube-proxy-lnk2t"
Mar 23 23:26:31 pause-574316 kubelet[7640]: I0323 23:26:31.161059 7640 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aeba9090-2690-42e1-8439-a0cd55ada6d0-xtables-lock\") pod \"kube-proxy-lnk2t\" (UID: \"aeba9090-2690-42e1-8439-a0cd55ada6d0\") " pod="kube-system/kube-proxy-lnk2t"
Mar 23 23:26:31 pause-574316 kubelet[7640]: I0323 23:26:31.161088 7640 reconciler.go:41] "Reconciler: start to sync state"
Mar 23 23:26:32 pause-574316 kubelet[7640]: I0323 23:26:32.271414 7640 scope.go:115] "RemoveContainer" containerID="7ff3dcd747a3b0f733eda143cf5993de0d0e1afd3dbd1b2b2f9a8fd3dbea2be9"
Mar 23 23:26:32 pause-574316 kubelet[7640]: I0323 23:26:32.700707 7640 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="542477f9c5e1de564352e093d277e29ea04f9ada02cdebe4924d534ea2be3623"
Mar 23 23:26:34 pause-574316 kubelet[7640]: I0323 23:26:34.734860 7640 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
Mar 23 23:26:35 pause-574316 kubelet[7640]: I0323 23:26:35.343216 7640 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=05fc3b9f-534f-4c25-ab9a-0f1ea4cb9014 path="/var/lib/kubelet/pods/05fc3b9f-534f-4c25-ab9a-0f1ea4cb9014/volumes"
Mar 23 23:26:37 pause-574316 kubelet[7640]: I0323 23:26:37.006845 7640 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
-- /stdout --
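(Reading note on the dump above: the etcd "apply request took too long" traces, the apiserver/controller-manager/scheduler connection-refused errors, and the kubelet "RemoveContainer" lines all fall in the window where the first set of control-plane containers was torn down and restarted during the second start; the replacement containers do report their caches as synced by 23:26:44. A minimal way to double-check etcd health when reproducing this locally is sketched below; the cert paths are taken from the etcd startup line in the dump, the pod name assumes the standard etcd-<node> static pod, and it assumes etcdctl is available in the etcd image, as it normally is:
  # query etcd from inside its own pod, reusing the server certs shown above
  kubectl --context pause-574316 -n kube-system exec etcd-pause-574316 -- \
    etcdctl --cacert=/var/lib/minikube/certs/etcd/ca.crt \
            --cert=/var/lib/minikube/certs/etcd/server.crt \
            --key=/var/lib/minikube/certs/etcd/server.key \
            endpoint status -w table)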
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-574316 -n pause-574316
helpers_test.go:261: (dbg) Run: kubectl --context pause-574316 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestPause/serial/SecondStartNoReconfiguration]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect pause-574316
helpers_test.go:235: (dbg) docker inspect pause-574316:
-- stdout --
[
{
"Id": "973cf0ca8459b8f5817b5ac522a54d72c66bd2d7c8e9e9db609121f92754b9fb",
"Created": "2023-03-23T23:25:04.583396388Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 390898,
"ExitCode": 0,
"Error": "",
"StartedAt": "2023-03-23T23:25:05.007909282Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:9d2236b717ccec479afd77862e0eef2affb7c3e4fe7eecdc0546bff7b370db25",
"ResolvConfPath": "/var/lib/docker/containers/973cf0ca8459b8f5817b5ac522a54d72c66bd2d7c8e9e9db609121f92754b9fb/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/973cf0ca8459b8f5817b5ac522a54d72c66bd2d7c8e9e9db609121f92754b9fb/hostname",
"HostsPath": "/var/lib/docker/containers/973cf0ca8459b8f5817b5ac522a54d72c66bd2d7c8e9e9db609121f92754b9fb/hosts",
"LogPath": "/var/lib/docker/containers/973cf0ca8459b8f5817b5ac522a54d72c66bd2d7c8e9e9db609121f92754b9fb/973cf0ca8459b8f5817b5ac522a54d72c66bd2d7c8e9e9db609121f92754b9fb-json.log",
"Name": "/pause-574316",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"pause-574316:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {
"max-size": "100m"
}
},
"NetworkMode": "pause-574316",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 2147483648,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 4294967296,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/c54747c4abf8ec81cf7111f4ae0a9bdf3546a835b741fa9b4946c2cef7bb7c47-init/diff:/var/lib/docker/overlay2/d356d443959743e8c5ec1e688b0ccaccd2483fd24991ca327095d1ea51dadd79/diff:/var/lib/docker/overlay2/dd1855d68604dc5432757610d41f6488e2cf65b7ade63d0ac4dd50e3cb700545/diff:/var/lib/docker/overlay2/3ae5a9ac34ca4f4036f376d3f7ee1e6d806107b6ba140eee2af2df3088fe2af4/diff:/var/lib/docker/overlay2/a88a7a03b1dddb065d2da925165770d1982de0fb6388d7798dec4a6c996388ed/diff:/var/lib/docker/overlay2/11e0cdbbdfb5d84e0d99a3d4a7693f825097d37baa31784b182606407b254347/diff:/var/lib/docker/overlay2/f3679d076f087c60feb261250bae0ef050d7ed7a8876697b61f4e74260ac5c25/diff:/var/lib/docker/overlay2/3a9213ab7d98194272e65090b79370f92e0fed3b68466ca89c2fce6cc06bee37/diff:/var/lib/docker/overlay2/c7e7b51e4ed37e163c31a7a2769a396f00a3a46bbe043bb3d74144e3d7dbdf4b/diff:/var/lib/docker/overlay2/a5a37da3c24f5ba9b69245b491d59fa7f875d4bf22ab2d3b4fe2e0480245836e/diff:/var/lib/docker/overlay2/f36025
f30104b76500045a0755939ab273914eecce2e91f0541c32de5325546f/diff:/var/lib/docker/overlay2/ef9ccd83ee71ed9d46782a820551dbda8865609796f631a741766fab9be9c04b/diff:/var/lib/docker/overlay2/e105b68b5b16f55e25547056d8ce228bdac36d93107fd4a3a78c8b026fbe0140/diff:/var/lib/docker/overlay2/75ca52704ffd583bb6fbed231278a5c352311cb4dee88f8b731377a47cdf43cd/diff:/var/lib/docker/overlay2/70a153c20f330aaea42285756d01aeb9a3e45e8909ea0b266c7d189438588e4b/diff:/var/lib/docker/overlay2/e07683b025df1da95650fadc2612b6df0024b6d4ab531cf439bb426bb94dd7c6/diff:/var/lib/docker/overlay2/a9c09db98b0de89a8bd85bb42c47585ec8dd924dfea9913e0e1e581771cb76db/diff:/var/lib/docker/overlay2/467577b0b0b8cb64beff8ef36e7da084fb7cddcdea88ced35ada883720038870/diff:/var/lib/docker/overlay2/89ecada524594426b58db802e9a64eff841e5a0dda6609f65ba80c77dc71866e/diff:/var/lib/docker/overlay2/d2e226af46510168fcd51d532ca7a03e77c9d9eb5253b85afd78b26e7b839180/diff:/var/lib/docker/overlay2/e7c1552e27888c5d4d72be70f7b4614ac96872e390e99ad721f043fa28cdc212/diff:/var/lib/d
ocker/overlay2/3074211fc4276144c82302477aac25cc2363357462b8212747bf9a6abdb179b8/diff:/var/lib/docker/overlay2/2f0eed0a121e12185ea49a07f0a026b7cd3add1c64e943d8f00609db9cb06035/diff:/var/lib/docker/overlay2/efa9237fe1d3ed78c6d7939b6d7a46778b6c3851395039e00da7e7ba1c07743d/diff:/var/lib/docker/overlay2/0ca055233446f0ea58f8b702a09b991f77ae9c6f1a338762761848f3a4b12d4e/diff:/var/lib/docker/overlay2/aa7036e406ea8fcd3317c56097ff3b2227796276b2a8ab2f3f7103fed4dfa3b5/diff:/var/lib/docker/overlay2/2f3123bc47bc73bed1b1f7f75675e13e493ca4c8e4f5c4cb662aae58d9373cca/diff:/var/lib/docker/overlay2/1275037c371fbe052f7ca3e9c640764633c72ba9f3d6954b012d34cae8b5d69d/diff:/var/lib/docker/overlay2/7b9c1ddebbcba2b26d07bd7fba9c0fd87ce195be38c2a75f219ac7de57f85b3f/diff:/var/lib/docker/overlay2/2b39bb0f285174bfa621ed101af05ba3552825ab700a73135af1e8b8d7f0bb81/diff:/var/lib/docker/overlay2/643ab8ec872c6defa175401a06dd4a300105c4061619e41059a39a3ee35e3d40/diff:/var/lib/docker/overlay2/713ee57325a771a6a041c255726b832978f929eb1147c72212d96dd7dde
734b2/diff:/var/lib/docker/overlay2/19c1f1f71db682b75e904ad1c7d909f372d24486542012874e578917dc9a9bdf/diff:/var/lib/docker/overlay2/d26fed6403eddd78cf74be1d4a1f4012e1edccb465491f947e4746d92cebcd56/diff:/var/lib/docker/overlay2/0086cdc0bd9c0e4bd086d59a3944cac9d08674d00c80fa77d1f9faa935a5fb19/diff:/var/lib/docker/overlay2/9e14b9f084a1ea7826ee394f169e32a19b56fa135bde5da69486094355c778bb/diff:/var/lib/docker/overlay2/92af9bb2d1b59e9a45cd00af02a78ed7edab34388b268ad30cf749708e273ee8/diff:/var/lib/docker/overlay2/b13dcd677cb58d34d216059052299c900b1728fe3d46ae29cdf0f9a6991696ac/diff:/var/lib/docker/overlay2/30ba19dfbdf89b50aa26fe1695664407f059e1a354830d1d0363128794c81c8f/diff:/var/lib/docker/overlay2/0a91cb0450bc46b302d1b3518574e94a65ab366928b7b67d4dd446e682a14338/diff:/var/lib/docker/overlay2/0b3c4aae10bf80ea7c918fa052ad5ed468c2ebe01aa2f0658bc20304d1f6b07e/diff:/var/lib/docker/overlay2/9602ed727f176a29d28ed2d2045ad3c93f4ec63578399744c69db3d3057f1ed7/diff:/var/lib/docker/overlay2/33399f037b75aa41b061c2f9330cd6f041c290
9051f6ad5b09141a0346202db9/diff",
"MergedDir": "/var/lib/docker/overlay2/c54747c4abf8ec81cf7111f4ae0a9bdf3546a835b741fa9b4946c2cef7bb7c47/merged",
"UpperDir": "/var/lib/docker/overlay2/c54747c4abf8ec81cf7111f4ae0a9bdf3546a835b741fa9b4946c2cef7bb7c47/diff",
"WorkDir": "/var/lib/docker/overlay2/c54747c4abf8ec81cf7111f4ae0a9bdf3546a835b741fa9b4946c2cef7bb7c47/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "pause-574316",
"Source": "/var/lib/docker/volumes/pause-574316/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "pause-574316",
"Domainname": "",
"User": "root",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "pause-574316",
"name.minikube.sigs.k8s.io": "pause-574316",
"org.opencontainers.image.ref.name": "ubuntu",
"org.opencontainers.image.version": "20.04",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "83727ed535e639dbb7b60a28c289ec43475eb83a2bfc731da6a7d8b3710be5ba",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32989"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32988"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32985"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32987"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32986"
}
]
},
"SandboxKey": "/var/run/docker/netns/83727ed535e6",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"pause-574316": {
"IPAMConfig": {
"IPv4Address": "192.168.67.2"
},
"Links": null,
"Aliases": [
"973cf0ca8459",
"pause-574316"
],
"NetworkID": "2400bfbdd9cf00f3450521e73ae0be02c2bb9e5678c8bce35f9e0dc4ced8fa23",
"EndpointID": "1af4d5eb5080f4897840d3dd79c7fcfc8ac3d8dcb7665dd57389ff515a84a05e",
"Gateway": "192.168.67.1",
"IPAddress": "192.168.67.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:c0:a8:43:02",
"DriverOpts": null
}
}
}
}
]
-- /stdout --
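(If only the container address and the mapped API-server port are needed from the inspect output above, rather than the full JSON, docker's built-in template and port helpers can pull them directly; a small sketch against this profile name:
  # container IP on the pause-574316 network (192.168.67.2 in this run)
  docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' pause-574316
  # host port bound to the API server's 8443/tcp (32986 in this run)
  docker port pause-574316 8443)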
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p pause-574316 -n pause-574316
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p pause-574316 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-574316 logs -n 25: (1.280032379s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs:
-- stdout --
*
* ==> Audit <==
* |---------|------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
| ssh | -p cilium-452361 sudo | cilium-452361 | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC | |
| | systemctl cat kubelet | | | | | |
| | --no-pager | | | | | |
| ssh | -p cilium-452361 sudo | cilium-452361 | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC | |
| | journalctl -xeu kubelet --all | | | | | |
| | --full --no-pager | | | | | |
| ssh | -p cilium-452361 sudo cat | cilium-452361 | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC | |
| | /etc/kubernetes/kubelet.conf | | | | | |
| ssh | -p cilium-452361 sudo cat | cilium-452361 | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC | |
| | /var/lib/kubelet/config.yaml | | | | | |
| ssh | -p cilium-452361 sudo | cilium-452361 | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC | |
| | systemctl status docker --all | | | | | |
| | --full --no-pager | | | | | |
| ssh | -p cilium-452361 sudo | cilium-452361 | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC | |
| | systemctl cat docker | | | | | |
| | --no-pager | | | | | |
| ssh | -p cilium-452361 sudo cat | cilium-452361 | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC | |
| | /etc/docker/daemon.json | | | | | |
| ssh | -p cilium-452361 sudo docker | cilium-452361 | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC | |
| | system info | | | | | |
| start | -p force-systemd-env-286741 | force-systemd-env-286741 | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC | |
| | --memory=2048 | | | | | |
| | --alsologtostderr | | | | | |
| | -v=5 --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| ssh | -p cilium-452361 sudo | cilium-452361 | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC | |
| | systemctl status cri-docker | | | | | |
| | --all --full --no-pager | | | | | |
| ssh | -p cilium-452361 sudo | cilium-452361 | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC | |
| | systemctl cat cri-docker | | | | | |
| | --no-pager | | | | | |
| ssh | -p cilium-452361 sudo cat | cilium-452361 | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC | |
| | /etc/systemd/system/cri-docker.service.d/10-cni.conf | | | | | |
| ssh | -p cilium-452361 sudo cat | cilium-452361 | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC | |
| | /usr/lib/systemd/system/cri-docker.service | | | | | |
| ssh | -p cilium-452361 sudo | cilium-452361 | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC | |
| | cri-dockerd --version | | | | | |
| ssh | -p cilium-452361 sudo | cilium-452361 | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC | |
| | systemctl status containerd | | | | | |
| | --all --full --no-pager | | | | | |
| ssh | -p cilium-452361 sudo | cilium-452361 | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC | |
| | systemctl cat containerd | | | | | |
| | --no-pager | | | | | |
| ssh | -p cilium-452361 sudo cat | cilium-452361 | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC | |
| | /lib/systemd/system/containerd.service | | | | | |
| ssh | -p cilium-452361 sudo cat | cilium-452361 | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC | |
| | /etc/containerd/config.toml | | | | | |
| ssh | -p cilium-452361 sudo | cilium-452361 | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC | |
| | containerd config dump | | | | | |
| ssh | -p cilium-452361 sudo | cilium-452361 | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC | |
| | systemctl status crio --all | | | | | |
| | --full --no-pager | | | | | |
| ssh | -p cilium-452361 sudo | cilium-452361 | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC | |
| | systemctl cat crio --no-pager | | | | | |
| ssh | -p cilium-452361 sudo find | cilium-452361 | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC | |
| | /etc/crio -type f -exec sh -c | | | | | |
| | 'echo {}; cat {}' \; | | | | | |
| ssh | -p cilium-452361 sudo crio | cilium-452361 | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC | |
| | config | | | | | |
| delete | -p cilium-452361 | cilium-452361 | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC | 23 Mar 23 23:26 UTC |
| start | -p old-k8s-version-063647 | old-k8s-version-063647 | jenkins | v1.29.0 | 23 Mar 23 23:26 UTC | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --kvm-network=default | | | | | |
| | --kvm-qemu-uri=qemu:///system | | | | | |
| | --disable-driver-mounts | | | | | |
| | --keep-context=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| | --kubernetes-version=v1.16.0 | | | | | |
|---------|------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2023/03/23 23:26:40
Running on machine: ubuntu-20-agent-3
Binary: Built with gc go1.20.2 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0323 23:26:40.042149 428061 out.go:296] Setting OutFile to fd 1 ...
I0323 23:26:40.042248 428061 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0323 23:26:40.042257 428061 out.go:309] Setting ErrFile to fd 2...
I0323 23:26:40.042261 428061 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0323 23:26:40.042366 428061 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16143-62012/.minikube/bin
I0323 23:26:40.042954 428061 out.go:303] Setting JSON to false
I0323 23:26:40.047193 428061 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":7746,"bootTime":1679606254,"procs":1211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1030-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0323 23:26:40.047254 428061 start.go:135] virtualization: kvm guest
I0323 23:26:40.049796 428061 out.go:177] * [old-k8s-version-063647] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
I0323 23:26:40.051284 428061 out.go:177] - MINIKUBE_LOCATION=16143
I0323 23:26:40.051309 428061 notify.go:220] Checking for updates...
I0323 23:26:40.052905 428061 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0323 23:26:40.054785 428061 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/16143-62012/kubeconfig
I0323 23:26:40.056430 428061 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/16143-62012/.minikube
I0323 23:26:40.058083 428061 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0323 23:26:40.059646 428061 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0323 23:26:40.061783 428061 config.go:182] Loaded profile config "force-systemd-env-286741": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.3
I0323 23:26:40.061882 428061 config.go:182] Loaded profile config "kubernetes-upgrade-120624": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.0-beta.0
I0323 23:26:40.062033 428061 config.go:182] Loaded profile config "pause-574316": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.3
I0323 23:26:40.062098 428061 driver.go:365] Setting default libvirt URI to qemu:///system
I0323 23:26:40.147368 428061 docker.go:121] docker version: linux-23.0.1:Docker Engine - Community
I0323 23:26:40.147472 428061 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0323 23:26:40.295961 428061 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:47 SystemTime:2023-03-23 23:26:40.275708441 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1030-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:23.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1e1ea6e986c6c86565bc33d52e34b81b3e2bc71f Expected:1e1ea6e986c6c86565bc33d52e34b81b3e2bc71f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.16.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0323 23:26:40.296057 428061 docker.go:294] overlay module found
I0323 23:26:40.298752 428061 out.go:177] * Using the docker driver based on user configuration
I0323 23:26:40.300448 428061 start.go:295] selected driver: docker
I0323 23:26:40.300468 428061 start.go:856] validating driver "docker" against <nil>
I0323 23:26:40.300482 428061 start.go:867] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0323 23:26:40.301339 428061 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0323 23:26:40.438182 428061 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:42 SystemTime:2023-03-23 23:26:40.428586758 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1030-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:23.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1e1ea6e986c6c86565bc33d52e34b81b3e2bc71f Expected:1e1ea6e986c6c86565bc33d52e34b81b3e2bc71f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.16.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0323 23:26:40.438301 428061 start_flags.go:305] no existing cluster config was found, will generate one from the flags
I0323 23:26:40.438509 428061 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0323 23:26:40.441248 428061 out.go:177] * Using Docker driver with root privileges
I0323 23:26:40.442932 428061 cni.go:84] Creating CNI manager for ""
I0323 23:26:40.442974 428061 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
I0323 23:26:40.442984 428061 start_flags.go:319] config:
{Name:old-k8s-version-063647 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-063647 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0323 23:26:40.444845 428061 out.go:177] * Starting control plane node old-k8s-version-063647 in cluster old-k8s-version-063647
I0323 23:26:40.446536 428061 cache.go:120] Beginning downloading kic base image for docker with docker
I0323 23:26:40.448053 428061 out.go:177] * Pulling base image ...
I0323 23:26:40.449652 428061 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
I0323 23:26:40.449683 428061 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 in local docker daemon
I0323 23:26:40.449703 428061 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16143-62012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
I0323 23:26:40.449720 428061 cache.go:57] Caching tarball of preloaded images
I0323 23:26:40.449803 428061 preload.go:174] Found /home/jenkins/minikube-integration/16143-62012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0323 23:26:40.449814 428061 cache.go:60] Finished verifying existence of preloaded tar for v1.16.0 on docker
I0323 23:26:40.449923 428061 profile.go:148] Saving config to /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/old-k8s-version-063647/config.json ...
I0323 23:26:40.449948 428061 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/old-k8s-version-063647/config.json: {Name:mkd269866aecb4e0ebd7c80fae44792dc2e78f68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0323 23:26:40.540045 428061 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 in local docker daemon, skipping pull
I0323 23:26:40.540081 428061 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 exists in daemon, skipping load
I0323 23:26:40.540105 428061 cache.go:193] Successfully downloaded all kic artifacts
I0323 23:26:40.540144 428061 start.go:364] acquiring machines lock for old-k8s-version-063647: {Name:mk836ec8f4a8439e66a7c2c2dcb6074efc06d654 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0323 23:26:40.540267 428061 start.go:368] acquired machines lock for "old-k8s-version-063647" in 98.708µs
I0323 23:26:40.540298 428061 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-063647 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-063647 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
I0323 23:26:40.540420 428061 start.go:125] createHost starting for "" (driver="docker")
I0323 23:26:37.666420 360910 api_server.go:252] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
I0323 23:26:37.666756 360910 api_server.go:268] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
I0323 23:26:37.915164 360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0323 23:26:37.934415 360910 logs.go:277] 2 containers: [e04b42305ee7 0d8b85178a1f]
I0323 23:26:37.934495 360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0323 23:26:37.954816 360910 logs.go:277] 1 containers: [a90d829451b2]
I0323 23:26:37.954881 360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0323 23:26:37.973222 360910 logs.go:277] 0 containers: []
W0323 23:26:37.973245 360910 logs.go:279] No container was found matching "coredns"
I0323 23:26:37.973298 360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0323 23:26:37.992640 360910 logs.go:277] 2 containers: [c527be391322 4bb7f84567d3]
I0323 23:26:37.992731 360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0323 23:26:38.012097 360910 logs.go:277] 1 containers: [333ad261cea4]
I0323 23:26:38.012179 360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0323 23:26:38.030328 360910 logs.go:277] 2 containers: [9dd80939614e af93893100e7]
I0323 23:26:38.030409 360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I0323 23:26:38.048993 360910 logs.go:277] 0 containers: []
W0323 23:26:38.049024 360910 logs.go:279] No container was found matching "kindnet"
I0323 23:26:38.049080 360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0323 23:26:38.068667 360910 logs.go:277] 1 containers: [eac6b13c2df0]
I0323 23:26:38.068707 360910 logs.go:123] Gathering logs for describe nodes ...
I0323 23:26:38.068722 360910 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0323 23:26:38.127007 360910 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0323 23:26:38.127040 360910 logs.go:123] Gathering logs for kube-controller-manager [9dd80939614e] ...
I0323 23:26:38.127056 360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dd80939614e"
I0323 23:26:38.147666 360910 logs.go:123] Gathering logs for dmesg ...
I0323 23:26:38.147691 360910 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0323 23:26:38.168212 360910 logs.go:123] Gathering logs for kube-scheduler [4bb7f84567d3] ...
I0323 23:26:38.168249 360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bb7f84567d3"
I0323 23:26:38.197795 360910 logs.go:123] Gathering logs for kube-controller-manager [af93893100e7] ...
I0323 23:26:38.197836 360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af93893100e7"
I0323 23:26:38.243949 360910 logs.go:123] Gathering logs for storage-provisioner [eac6b13c2df0] ...
I0323 23:26:38.243989 360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eac6b13c2df0"
I0323 23:26:38.264103 360910 logs.go:123] Gathering logs for etcd [a90d829451b2] ...
I0323 23:26:38.264130 360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a90d829451b2"
I0323 23:26:38.288660 360910 logs.go:123] Gathering logs for kube-scheduler [c527be391322] ...
I0323 23:26:38.288696 360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c527be391322"
I0323 23:26:38.363370 360910 logs.go:123] Gathering logs for kube-proxy [333ad261cea4] ...
I0323 23:26:38.363403 360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 333ad261cea4"
I0323 23:26:38.386060 360910 logs.go:123] Gathering logs for container status ...
I0323 23:26:38.386089 360910 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0323 23:26:38.418791 360910 logs.go:123] Gathering logs for kubelet ...
I0323 23:26:38.418815 360910 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0323 23:26:38.548713 360910 logs.go:123] Gathering logs for kube-apiserver [e04b42305ee7] ...
I0323 23:26:38.548764 360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e04b42305ee7"
I0323 23:26:38.579492 360910 logs.go:123] Gathering logs for kube-apiserver [0d8b85178a1f] ...
I0323 23:26:38.579537 360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d8b85178a1f"
I0323 23:26:38.618692 360910 logs.go:123] Gathering logs for Docker ...
I0323 23:26:38.618721 360910 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0323 23:26:41.155209 360910 api_server.go:252] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
I0323 23:26:41.155664 360910 api_server.go:268] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
I0323 23:26:41.415055 360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0323 23:26:41.434873 360910 logs.go:277] 2 containers: [e04b42305ee7 0d8b85178a1f]
I0323 23:26:41.434945 360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0323 23:26:41.455006 360910 logs.go:277] 1 containers: [a90d829451b2]
I0323 23:26:41.455077 360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0323 23:26:41.472882 360910 logs.go:277] 0 containers: []
W0323 23:26:41.472906 360910 logs.go:279] No container was found matching "coredns"
I0323 23:26:41.472950 360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0323 23:26:41.491292 360910 logs.go:277] 2 containers: [c527be391322 4bb7f84567d3]
I0323 23:26:41.491390 360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0323 23:26:39.446424 401618 pod_ready.go:102] pod "etcd-pause-574316" in "kube-system" namespace has status "Ready":"False"
I0323 23:26:41.447016 401618 pod_ready.go:102] pod "etcd-pause-574316" in "kube-system" namespace has status "Ready":"False"
I0323 23:26:39.280123 427158 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
I0323 23:26:39.280357 427158 start.go:159] libmachine.API.Create for "force-systemd-env-286741" (driver="docker")
I0323 23:26:39.280387 427158 client.go:168] LocalClient.Create starting
I0323 23:26:39.280458 427158 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16143-62012/.minikube/certs/ca.pem
I0323 23:26:39.280507 427158 main.go:141] libmachine: Decoding PEM data...
I0323 23:26:39.280530 427158 main.go:141] libmachine: Parsing certificate...
I0323 23:26:39.280594 427158 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16143-62012/.minikube/certs/cert.pem
I0323 23:26:39.280623 427158 main.go:141] libmachine: Decoding PEM data...
I0323 23:26:39.280640 427158 main.go:141] libmachine: Parsing certificate...
I0323 23:26:39.280974 427158 cli_runner.go:164] Run: docker network inspect force-systemd-env-286741 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0323 23:26:39.354615 427158 cli_runner.go:211] docker network inspect force-systemd-env-286741 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0323 23:26:39.354704 427158 network_create.go:281] running [docker network inspect force-systemd-env-286741] to gather additional debugging logs...
I0323 23:26:39.354728 427158 cli_runner.go:164] Run: docker network inspect force-systemd-env-286741
W0323 23:26:39.425557 427158 cli_runner.go:211] docker network inspect force-systemd-env-286741 returned with exit code 1
I0323 23:26:39.425596 427158 network_create.go:284] error running [docker network inspect force-systemd-env-286741]: docker network inspect force-systemd-env-286741: exit status 1
stdout:
[]
stderr:
Error response from daemon: network force-systemd-env-286741 not found
I0323 23:26:39.425628 427158 network_create.go:286] output of [docker network inspect force-systemd-env-286741]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network force-systemd-env-286741 not found
** /stderr **
I0323 23:26:39.425680 427158 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0323 23:26:39.503698 427158 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-5c8e73f5a026 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:0a:b3:fe:c5} reservation:<nil>}
I0323 23:26:39.504676 427158 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-76643bda3762 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:f7:a2:b3:ec} reservation:<nil>}
I0323 23:26:39.505710 427158 network.go:214] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2400bfbdd9cf IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:a7:a4:76:86} reservation:<nil>}
I0323 23:26:39.506685 427158 network.go:214] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-cd4e78a8bfb8 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:a6:13:91:cb} reservation:<nil>}
I0323 23:26:39.507885 427158 network.go:209] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00175a3d0}
I0323 23:26:39.507923 427158 network_create.go:123] attempt to create docker network force-systemd-env-286741 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
I0323 23:26:39.507984 427158 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-286741 force-systemd-env-286741
I0323 23:26:39.624494 427158 network_create.go:107] docker network force-systemd-env-286741 192.168.85.0/24 created
I0323 23:26:39.624528 427158 kic.go:117] calculated static IP "192.168.85.2" for the "force-systemd-env-286741" container
I0323 23:26:39.624580 427158 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0323 23:26:39.699198 427158 cli_runner.go:164] Run: docker volume create force-systemd-env-286741 --label name.minikube.sigs.k8s.io=force-systemd-env-286741 --label created_by.minikube.sigs.k8s.io=true
I0323 23:26:39.772552 427158 oci.go:103] Successfully created a docker volume force-systemd-env-286741
I0323 23:26:39.772640 427158 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-286741-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-286741 --entrypoint /usr/bin/test -v force-systemd-env-286741:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 -d /var/lib
I0323 23:26:40.396101 427158 oci.go:107] Successfully prepared a docker volume force-systemd-env-286741
I0323 23:26:40.396169 427158 preload.go:132] Checking if preload exists for k8s version v1.26.3 and runtime docker
I0323 23:26:40.396201 427158 kic.go:190] Starting extracting preloaded images to volume ...
I0323 23:26:40.396283 427158 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16143-62012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-286741:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 -I lz4 -xf /preloaded.tar -C /extractDir
I0323 23:26:43.652059 427158 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16143-62012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-286741:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 -I lz4 -xf /preloaded.tar -C /extractDir: (3.255698579s)
I0323 23:26:43.652098 427158 kic.go:199] duration metric: took 3.255892 seconds to extract preloaded images to volume
W0323 23:26:43.652249 427158 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0323 23:26:43.652340 427158 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0323 23:26:43.788292 427158 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-286741 --name force-systemd-env-286741 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-286741 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-286741 --network force-systemd-env-286741 --ip 192.168.85.2 --volume force-systemd-env-286741:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978
I0323 23:26:40.542931 428061 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
I0323 23:26:40.543143 428061 start.go:159] libmachine.API.Create for "old-k8s-version-063647" (driver="docker")
I0323 23:26:40.543161 428061 client.go:168] LocalClient.Create starting
I0323 23:26:40.543233 428061 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16143-62012/.minikube/certs/ca.pem
I0323 23:26:40.543267 428061 main.go:141] libmachine: Decoding PEM data...
I0323 23:26:40.543291 428061 main.go:141] libmachine: Parsing certificate...
I0323 23:26:40.543363 428061 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16143-62012/.minikube/certs/cert.pem
I0323 23:26:40.543394 428061 main.go:141] libmachine: Decoding PEM data...
I0323 23:26:40.543409 428061 main.go:141] libmachine: Parsing certificate...
I0323 23:26:40.543830 428061 cli_runner.go:164] Run: docker network inspect old-k8s-version-063647 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0323 23:26:40.622688 428061 cli_runner.go:211] docker network inspect old-k8s-version-063647 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0323 23:26:40.622796 428061 network_create.go:281] running [docker network inspect old-k8s-version-063647] to gather additional debugging logs...
I0323 23:26:40.622825 428061 cli_runner.go:164] Run: docker network inspect old-k8s-version-063647
W0323 23:26:40.691850 428061 cli_runner.go:211] docker network inspect old-k8s-version-063647 returned with exit code 1
I0323 23:26:40.691881 428061 network_create.go:284] error running [docker network inspect old-k8s-version-063647]: docker network inspect old-k8s-version-063647: exit status 1
stdout:
[]
stderr:
Error response from daemon: network old-k8s-version-063647 not found
I0323 23:26:40.691895 428061 network_create.go:286] output of [docker network inspect old-k8s-version-063647]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network old-k8s-version-063647 not found
** /stderr **
I0323 23:26:40.691971 428061 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0323 23:26:40.769117 428061 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-5c8e73f5a026 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:0a:b3:fe:c5} reservation:<nil>}
I0323 23:26:40.769965 428061 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-76643bda3762 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:f7:a2:b3:ec} reservation:<nil>}
I0323 23:26:40.770928 428061 network.go:214] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2400bfbdd9cf IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:a7:a4:76:86} reservation:<nil>}
I0323 23:26:40.771945 428061 network.go:214] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-cd4e78a8bfb8 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:a6:13:91:cb} reservation:<nil>}
I0323 23:26:40.773155 428061 network.go:214] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-f79741dc633b IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:02:42:e0:82:cf:7a} reservation:<nil>}
I0323 23:26:40.774473 428061 network.go:209] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0014e36b0}
I0323 23:26:40.774511 428061 network_create.go:123] attempt to create docker network old-k8s-version-063647 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
I0323 23:26:40.774584 428061 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-063647 old-k8s-version-063647
I0323 23:26:40.898151 428061 network_create.go:107] docker network old-k8s-version-063647 192.168.94.0/24 created
I0323 23:26:40.898189 428061 kic.go:117] calculated static IP "192.168.94.2" for the "old-k8s-version-063647" container
I0323 23:26:40.898268 428061 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0323 23:26:40.974566 428061 cli_runner.go:164] Run: docker volume create old-k8s-version-063647 --label name.minikube.sigs.k8s.io=old-k8s-version-063647 --label created_by.minikube.sigs.k8s.io=true
I0323 23:26:41.045122 428061 oci.go:103] Successfully created a docker volume old-k8s-version-063647
I0323 23:26:41.045212 428061 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-063647-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-063647 --entrypoint /usr/bin/test -v old-k8s-version-063647:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 -d /var/lib
I0323 23:26:44.069733 428061 cli_runner.go:217] Completed: docker run --rm --name old-k8s-version-063647-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-063647 --entrypoint /usr/bin/test -v old-k8s-version-063647:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 -d /var/lib: (3.024480313s)
I0323 23:26:44.069768 428061 oci.go:107] Successfully prepared a docker volume old-k8s-version-063647
I0323 23:26:44.069781 428061 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
I0323 23:26:44.069803 428061 kic.go:190] Starting extracting preloaded images to volume ...
I0323 23:26:44.069874 428061 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16143-62012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-063647:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 -I lz4 -xf /preloaded.tar -C /extractDir
I0323 23:26:43.946954 401618 pod_ready.go:102] pod "etcd-pause-574316" in "kube-system" namespace has status "Ready":"False"
I0323 23:26:44.447057 401618 pod_ready.go:92] pod "etcd-pause-574316" in "kube-system" namespace has status "Ready":"True"
I0323 23:26:44.447087 401618 pod_ready.go:81] duration metric: took 7.011439342s waiting for pod "etcd-pause-574316" in "kube-system" namespace to be "Ready" ...
I0323 23:26:44.447102 401618 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-574316" in "kube-system" namespace to be "Ready" ...
I0323 23:26:44.452104 401618 pod_ready.go:92] pod "kube-apiserver-pause-574316" in "kube-system" namespace has status "Ready":"True"
I0323 23:26:44.452122 401618 pod_ready.go:81] duration metric: took 5.012337ms waiting for pod "kube-apiserver-pause-574316" in "kube-system" namespace to be "Ready" ...
I0323 23:26:44.452131 401618 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-574316" in "kube-system" namespace to be "Ready" ...
I0323 23:26:45.154244 401618 pod_ready.go:92] pod "kube-controller-manager-pause-574316" in "kube-system" namespace has status "Ready":"True"
I0323 23:26:45.154286 401618 pod_ready.go:81] duration metric: took 702.146362ms waiting for pod "kube-controller-manager-pause-574316" in "kube-system" namespace to be "Ready" ...
I0323 23:26:45.154300 401618 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-lnk2t" in "kube-system" namespace to be "Ready" ...
I0323 23:26:45.161861 401618 pod_ready.go:92] pod "kube-proxy-lnk2t" in "kube-system" namespace has status "Ready":"True"
I0323 23:26:45.161889 401618 pod_ready.go:81] duration metric: took 7.580234ms waiting for pod "kube-proxy-lnk2t" in "kube-system" namespace to be "Ready" ...
I0323 23:26:45.161903 401618 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-574316" in "kube-system" namespace to be "Ready" ...
I0323 23:26:45.166566 401618 pod_ready.go:92] pod "kube-scheduler-pause-574316" in "kube-system" namespace has status "Ready":"True"
I0323 23:26:45.166596 401618 pod_ready.go:81] duration metric: took 4.684396ms waiting for pod "kube-scheduler-pause-574316" in "kube-system" namespace to be "Ready" ...
I0323 23:26:45.166605 401618 pod_ready.go:38] duration metric: took 12.254811598s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0323 23:26:45.166630 401618 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0323 23:26:45.174654 401618 ops.go:34] apiserver oom_adj: -16
I0323 23:26:45.174677 401618 kubeadm.go:637] restartCluster took 54.651125652s
I0323 23:26:45.174685 401618 kubeadm.go:403] StartCluster complete in 54.678873105s
I0323 23:26:45.174705 401618 settings.go:142] acquiring lock: {Name:mk2143e7b36672d551bcc6ff6483f31f704df2e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0323 23:26:45.174775 401618 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/16143-62012/kubeconfig
I0323 23:26:45.175905 401618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16143-62012/kubeconfig: {Name:mkedf19780b2d3cba14a58c9ca6a4f1d32104ee0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0323 23:26:45.213579 401618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0323 23:26:45.213933 401618 config.go:182] Loaded profile config "pause-574316": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.3
I0323 23:26:45.213472 401618 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
I0323 23:26:45.214148 401618 kapi.go:59] client config for pause-574316: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16143-62012/.minikube/profiles/pause-574316/client.crt", KeyFile:"/home/jenkins/minikube-integration/16143-62012/.minikube/profiles/pause-574316/client.key", CAFile:"/home/jenkins/minikube-integration/16143-62012/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x192c3c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0323 23:26:45.414715 401618 out.go:177] * Enabled addons:
I0323 23:26:45.217242 401618 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-574316" context rescaled to 1 replicas
I0323 23:26:45.430053 401618 addons.go:499] enable addons completed in 216.595091ms: enabled=[]
I0323 23:26:45.430069 401618 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true}
I0323 23:26:45.436198 401618 out.go:177] * Verifying Kubernetes components...
I0323 23:26:41.512784 360910 logs.go:277] 1 containers: [333ad261cea4]
I0323 23:26:41.580770 360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0323 23:26:41.604486 360910 logs.go:277] 2 containers: [9dd80939614e af93893100e7]
I0323 23:26:41.604573 360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I0323 23:26:41.623789 360910 logs.go:277] 0 containers: []
W0323 23:26:41.623821 360910 logs.go:279] No container was found matching "kindnet"
I0323 23:26:41.623896 360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0323 23:26:41.644226 360910 logs.go:277] 1 containers: [eac6b13c2df0]
I0323 23:26:41.644272 360910 logs.go:123] Gathering logs for kubelet ...
I0323 23:26:41.644288 360910 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0323 23:26:41.748676 360910 logs.go:123] Gathering logs for dmesg ...
I0323 23:26:41.748714 360910 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0323 23:26:41.768332 360910 logs.go:123] Gathering logs for kube-controller-manager [9dd80939614e] ...
I0323 23:26:41.768367 360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dd80939614e"
I0323 23:26:41.792311 360910 logs.go:123] Gathering logs for kube-controller-manager [af93893100e7] ...
I0323 23:26:41.792341 360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af93893100e7"
I0323 23:26:41.830521 360910 logs.go:123] Gathering logs for etcd [a90d829451b2] ...
I0323 23:26:41.830556 360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a90d829451b2"
I0323 23:26:41.860609 360910 logs.go:123] Gathering logs for kube-scheduler [c527be391322] ...
I0323 23:26:41.860650 360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c527be391322"
I0323 23:26:41.932251 360910 logs.go:123] Gathering logs for kube-scheduler [4bb7f84567d3] ...
I0323 23:26:41.932290 360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bb7f84567d3"
I0323 23:26:41.963057 360910 logs.go:123] Gathering logs for container status ...
I0323 23:26:41.963098 360910 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0323 23:26:41.993699 360910 logs.go:123] Gathering logs for kube-apiserver [e04b42305ee7] ...
I0323 23:26:41.993742 360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e04b42305ee7"
I0323 23:26:42.025209 360910 logs.go:123] Gathering logs for Docker ...
I0323 23:26:42.025243 360910 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0323 23:26:42.056243 360910 logs.go:123] Gathering logs for describe nodes ...
I0323 23:26:42.056283 360910 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0323 23:26:42.128632 360910 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0323 23:26:42.128657 360910 logs.go:123] Gathering logs for kube-apiserver [0d8b85178a1f] ...
I0323 23:26:42.128672 360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d8b85178a1f"
I0323 23:26:42.163262 360910 logs.go:123] Gathering logs for kube-proxy [333ad261cea4] ...
I0323 23:26:42.163298 360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 333ad261cea4"
I0323 23:26:42.188287 360910 logs.go:123] Gathering logs for storage-provisioner [eac6b13c2df0] ...
I0323 23:26:42.188316 360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eac6b13c2df0"
I0323 23:26:44.714609 360910 api_server.go:252] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
I0323 23:26:44.715050 360910 api_server.go:268] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
I0323 23:26:44.915428 360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0323 23:26:44.936310 360910 logs.go:277] 2 containers: [e04b42305ee7 0d8b85178a1f]
I0323 23:26:44.936415 360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0323 23:26:44.957324 360910 logs.go:277] 1 containers: [a90d829451b2]
I0323 23:26:44.957387 360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0323 23:26:44.980654 360910 logs.go:277] 0 containers: []
W0323 23:26:44.980682 360910 logs.go:279] No container was found matching "coredns"
I0323 23:26:44.980734 360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0323 23:26:45.003148 360910 logs.go:277] 2 containers: [c527be391322 4bb7f84567d3]
I0323 23:26:45.003234 360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0323 23:26:45.022249 360910 logs.go:277] 1 containers: [333ad261cea4]
I0323 23:26:45.022323 360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0323 23:26:45.040205 360910 logs.go:277] 2 containers: [9dd80939614e af93893100e7]
I0323 23:26:45.040282 360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I0323 23:26:45.057312 360910 logs.go:277] 0 containers: []
W0323 23:26:45.057337 360910 logs.go:279] No container was found matching "kindnet"
I0323 23:26:45.057385 360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0323 23:26:45.080434 360910 logs.go:277] 1 containers: [eac6b13c2df0]
I0323 23:26:45.080479 360910 logs.go:123] Gathering logs for dmesg ...
I0323 23:26:45.080495 360910 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0323 23:26:45.104865 360910 logs.go:123] Gathering logs for kube-scheduler [4bb7f84567d3] ...
I0323 23:26:45.104918 360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bb7f84567d3"
I0323 23:26:45.133666 360910 logs.go:123] Gathering logs for kube-controller-manager [9dd80939614e] ...
I0323 23:26:45.133710 360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dd80939614e"
I0323 23:26:45.162931 360910 logs.go:123] Gathering logs for container status ...
I0323 23:26:45.162970 360910 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0323 23:26:45.202791 360910 logs.go:123] Gathering logs for kube-apiserver [e04b42305ee7] ...
I0323 23:26:45.202825 360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e04b42305ee7"
I0323 23:26:45.244277 360910 logs.go:123] Gathering logs for kube-apiserver [0d8b85178a1f] ...
I0323 23:26:45.244379 360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d8b85178a1f"
I0323 23:26:45.282659 360910 logs.go:123] Gathering logs for etcd [a90d829451b2] ...
I0323 23:26:45.282742 360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a90d829451b2"
I0323 23:26:45.313254 360910 logs.go:123] Gathering logs for kube-proxy [333ad261cea4] ...
I0323 23:26:45.313334 360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 333ad261cea4"
I0323 23:26:45.336545 360910 logs.go:123] Gathering logs for Docker ...
I0323 23:26:45.336594 360910 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0323 23:26:45.377128 360910 logs.go:123] Gathering logs for kubelet ...
I0323 23:26:45.377170 360910 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0323 23:26:45.514087 360910 logs.go:123] Gathering logs for kube-scheduler [c527be391322] ...
I0323 23:26:45.514205 360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c527be391322"
I0323 23:26:45.592082 360910 logs.go:123] Gathering logs for storage-provisioner [eac6b13c2df0] ...
I0323 23:26:45.592121 360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eac6b13c2df0"
I0323 23:26:45.619139 360910 logs.go:123] Gathering logs for describe nodes ...
I0323 23:26:45.619172 360910 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0323 23:26:45.678335 360910 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0323 23:26:45.678389 360910 logs.go:123] Gathering logs for kube-controller-manager [af93893100e7] ...
I0323 23:26:45.678404 360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af93893100e7"
I0323 23:26:45.436358 401618 start.go:890] CoreDNS already contains "host.minikube.internal" host record, skipping...
I0323 23:26:45.446881 401618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0323 23:26:45.460908 401618 node_ready.go:35] waiting up to 6m0s for node "pause-574316" to be "Ready" ...
I0323 23:26:45.463792 401618 node_ready.go:49] node "pause-574316" has status "Ready":"True"
I0323 23:26:45.463814 401618 node_ready.go:38] duration metric: took 2.869699ms waiting for node "pause-574316" to be "Ready" ...
I0323 23:26:45.463823 401618 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0323 23:26:45.468648 401618 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-lljqk" in "kube-system" namespace to be "Ready" ...
I0323 23:26:45.645139 401618 pod_ready.go:92] pod "coredns-787d4945fb-lljqk" in "kube-system" namespace has status "Ready":"True"
I0323 23:26:45.645160 401618 pod_ready.go:81] duration metric: took 176.488938ms waiting for pod "coredns-787d4945fb-lljqk" in "kube-system" namespace to be "Ready" ...
I0323 23:26:45.645170 401618 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-574316" in "kube-system" namespace to be "Ready" ...
I0323 23:26:46.045231 401618 pod_ready.go:92] pod "etcd-pause-574316" in "kube-system" namespace has status "Ready":"True"
I0323 23:26:46.045260 401618 pod_ready.go:81] duration metric: took 400.083583ms waiting for pod "etcd-pause-574316" in "kube-system" namespace to be "Ready" ...
I0323 23:26:46.045274 401618 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-574316" in "kube-system" namespace to be "Ready" ...
I0323 23:26:46.444173 401618 pod_ready.go:92] pod "kube-apiserver-pause-574316" in "kube-system" namespace has status "Ready":"True"
I0323 23:26:46.444194 401618 pod_ready.go:81] duration metric: took 398.912915ms waiting for pod "kube-apiserver-pause-574316" in "kube-system" namespace to be "Ready" ...
I0323 23:26:46.444204 401618 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-574316" in "kube-system" namespace to be "Ready" ...
I0323 23:26:46.844571 401618 pod_ready.go:92] pod "kube-controller-manager-pause-574316" in "kube-system" namespace has status "Ready":"True"
I0323 23:26:46.844592 401618 pod_ready.go:81] duration metric: took 400.382744ms waiting for pod "kube-controller-manager-pause-574316" in "kube-system" namespace to be "Ready" ...
I0323 23:26:46.844602 401618 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lnk2t" in "kube-system" namespace to be "Ready" ...
I0323 23:26:47.244514 401618 pod_ready.go:92] pod "kube-proxy-lnk2t" in "kube-system" namespace has status "Ready":"True"
I0323 23:26:47.244538 401618 pod_ready.go:81] duration metric: took 399.927693ms waiting for pod "kube-proxy-lnk2t" in "kube-system" namespace to be "Ready" ...
I0323 23:26:47.244548 401618 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-574316" in "kube-system" namespace to be "Ready" ...
I0323 23:26:47.644184 401618 pod_ready.go:92] pod "kube-scheduler-pause-574316" in "kube-system" namespace has status "Ready":"True"
I0323 23:26:47.644203 401618 pod_ready.go:81] duration metric: took 399.648889ms waiting for pod "kube-scheduler-pause-574316" in "kube-system" namespace to be "Ready" ...
I0323 23:26:47.644210 401618 pod_ready.go:38] duration metric: took 2.180378997s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0323 23:26:47.644231 401618 api_server.go:51] waiting for apiserver process to appear ...
I0323 23:26:47.644265 401618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0323 23:26:47.660462 401618 api_server.go:71] duration metric: took 2.230343116s to wait for apiserver process to appear ...
I0323 23:26:47.660489 401618 api_server.go:87] waiting for apiserver healthz status ...
I0323 23:26:47.660508 401618 api_server.go:252] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I0323 23:26:47.667464 401618 api_server.go:278] https://192.168.67.2:8443/healthz returned 200:
ok
I0323 23:26:47.668285 401618 api_server.go:140] control plane version: v1.26.3
I0323 23:26:47.668303 401618 api_server.go:130] duration metric: took 7.807644ms to wait for apiserver health ...
I0323 23:26:47.668310 401618 system_pods.go:43] waiting for kube-system pods to appear ...
I0323 23:26:47.847116 401618 system_pods.go:59] 6 kube-system pods found
I0323 23:26:47.847153 401618 system_pods.go:61] "coredns-787d4945fb-lljqk" [ce593e1c-39de-4a21-994e-157f74ab568e] Running
I0323 23:26:47.847161 401618 system_pods.go:61] "etcd-pause-574316" [7169e3e4-7786-4f24-a2dd-72dd5a23fc94] Running
I0323 23:26:47.847168 401618 system_pods.go:61] "kube-apiserver-pause-574316" [b9638a18-2208-4f86-9f5f-164a6129c16d] Running
I0323 23:26:47.847175 401618 system_pods.go:61] "kube-controller-manager-pause-574316" [8b9f404c-2710-4ae3-a29f-739d89bb6b42] Running
I0323 23:26:47.847181 401618 system_pods.go:61] "kube-proxy-lnk2t" [aeba9090-2690-42e1-8439-a0cd55ada6d0] Running
I0323 23:26:47.847187 401618 system_pods.go:61] "kube-scheduler-pause-574316" [f5014d38-c4ac-4952-bf48-afd90549b256] Running
I0323 23:26:47.847193 401618 system_pods.go:74] duration metric: took 178.878592ms to wait for pod list to return data ...
I0323 23:26:47.847201 401618 default_sa.go:34] waiting for default service account to be created ...
I0323 23:26:48.044586 401618 default_sa.go:45] found service account: "default"
I0323 23:26:48.044616 401618 default_sa.go:55] duration metric: took 197.409776ms for default service account to be created ...
I0323 23:26:48.044630 401618 system_pods.go:116] waiting for k8s-apps to be running ...
I0323 23:26:48.247931 401618 system_pods.go:86] 6 kube-system pods found
I0323 23:26:48.247963 401618 system_pods.go:89] "coredns-787d4945fb-lljqk" [ce593e1c-39de-4a21-994e-157f74ab568e] Running
I0323 23:26:48.247974 401618 system_pods.go:89] "etcd-pause-574316" [7169e3e4-7786-4f24-a2dd-72dd5a23fc94] Running
I0323 23:26:48.247980 401618 system_pods.go:89] "kube-apiserver-pause-574316" [b9638a18-2208-4f86-9f5f-164a6129c16d] Running
I0323 23:26:48.247986 401618 system_pods.go:89] "kube-controller-manager-pause-574316" [8b9f404c-2710-4ae3-a29f-739d89bb6b42] Running
I0323 23:26:48.247991 401618 system_pods.go:89] "kube-proxy-lnk2t" [aeba9090-2690-42e1-8439-a0cd55ada6d0] Running
I0323 23:26:48.247999 401618 system_pods.go:89] "kube-scheduler-pause-574316" [f5014d38-c4ac-4952-bf48-afd90549b256] Running
I0323 23:26:48.248007 401618 system_pods.go:126] duration metric: took 203.371205ms to wait for k8s-apps to be running ...
I0323 23:26:48.248015 401618 system_svc.go:44] waiting for kubelet service to be running ....
I0323 23:26:48.248065 401618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0323 23:26:48.258927 401618 system_svc.go:56] duration metric: took 10.902515ms WaitForService to wait for kubelet.
I0323 23:26:48.258954 401618 kubeadm.go:578] duration metric: took 2.828842444s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0323 23:26:48.258976 401618 node_conditions.go:102] verifying NodePressure condition ...
I0323 23:26:48.449583 401618 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
I0323 23:26:48.449608 401618 node_conditions.go:123] node cpu capacity is 8
I0323 23:26:48.449620 401618 node_conditions.go:105] duration metric: took 190.638556ms to run NodePressure ...
I0323 23:26:48.449633 401618 start.go:228] waiting for startup goroutines ...
I0323 23:26:48.449641 401618 start.go:233] waiting for cluster config update ...
I0323 23:26:48.449652 401618 start.go:242] writing updated cluster config ...
I0323 23:26:48.450019 401618 ssh_runner.go:195] Run: rm -f paused
I0323 23:26:48.534780 401618 start.go:554] kubectl: 1.26.3, cluster: 1.26.3 (minor skew: 0)
I0323 23:26:48.538018 401618 out.go:177] * Done! kubectl is now configured to use "pause-574316" cluster and "default" namespace by default
I0323 23:26:44.308331 427158 cli_runner.go:164] Run: docker container inspect force-systemd-env-286741 --format={{.State.Running}}
I0323 23:26:44.394439 427158 cli_runner.go:164] Run: docker container inspect force-systemd-env-286741 --format={{.State.Status}}
I0323 23:26:44.471392 427158 cli_runner.go:164] Run: docker exec force-systemd-env-286741 stat /var/lib/dpkg/alternatives/iptables
I0323 23:26:44.603293 427158 oci.go:144] the created container "force-systemd-env-286741" has a running status.
I0323 23:26:44.603330 427158 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16143-62012/.minikube/machines/force-systemd-env-286741/id_rsa...
I0323 23:26:44.920036 427158 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-62012/.minikube/machines/force-systemd-env-286741/id_rsa.pub -> /home/docker/.ssh/authorized_keys
I0323 23:26:44.920082 427158 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16143-62012/.minikube/machines/force-systemd-env-286741/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0323 23:26:45.161321 427158 cli_runner.go:164] Run: docker container inspect force-systemd-env-286741 --format={{.State.Status}}
I0323 23:26:45.251141 427158 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0323 23:26:45.251176 427158 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-286741 chown docker:docker /home/docker/.ssh/authorized_keys]
I0323 23:26:45.400052 427158 cli_runner.go:164] Run: docker container inspect force-systemd-env-286741 --format={{.State.Status}}
I0323 23:26:45.485912 427158 machine.go:88] provisioning docker machine ...
I0323 23:26:45.485973 427158 ubuntu.go:169] provisioning hostname "force-systemd-env-286741"
I0323 23:26:45.486046 427158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-286741
I0323 23:26:45.565967 427158 main.go:141] libmachine: Using SSH client type: native
I0323 23:26:45.566601 427158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e3e0] 0x811480 <nil> [] 0s} 127.0.0.1 33004 <nil> <nil>}
I0323 23:26:45.566627 427158 main.go:141] libmachine: About to run SSH command:
sudo hostname force-systemd-env-286741 && echo "force-systemd-env-286741" | sudo tee /etc/hostname
I0323 23:26:45.780316 427158 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-env-286741
I0323 23:26:45.780413 427158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-286741
I0323 23:26:45.856411 427158 main.go:141] libmachine: Using SSH client type: native
I0323 23:26:45.857051 427158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e3e0] 0x811480 <nil> [] 0s} 127.0.0.1 33004 <nil> <nil>}
I0323 23:26:45.857097 427158 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sforce-systemd-env-286741' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-286741/g' /etc/hosts;
else
echo '127.0.1.1 force-systemd-env-286741' | sudo tee -a /etc/hosts;
fi
fi
I0323 23:26:45.977892 427158 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0323 23:26:45.977934 427158 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16143-62012/.minikube CaCertPath:/home/jenkins/minikube-integration/16143-62012/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16143-62012/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16143-62012/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16143-62012/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16143-62012/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16143-62012/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16143-62012/.minikube}
I0323 23:26:45.977978 427158 ubuntu.go:177] setting up certificates
I0323 23:26:45.977996 427158 provision.go:83] configureAuth start
I0323 23:26:45.978074 427158 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-286741
I0323 23:26:46.057572 427158 provision.go:138] copyHostCerts
I0323 23:26:46.057625 427158 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-62012/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16143-62012/.minikube/ca.pem
I0323 23:26:46.057666 427158 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-62012/.minikube/ca.pem, removing ...
I0323 23:26:46.057678 427158 exec_runner.go:207] rm: /home/jenkins/minikube-integration/16143-62012/.minikube/ca.pem
I0323 23:26:46.057752 427158 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-62012/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16143-62012/.minikube/ca.pem (1078 bytes)
I0323 23:26:46.057846 427158 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-62012/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16143-62012/.minikube/cert.pem
I0323 23:26:46.057875 427158 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-62012/.minikube/cert.pem, removing ...
I0323 23:26:46.057885 427158 exec_runner.go:207] rm: /home/jenkins/minikube-integration/16143-62012/.minikube/cert.pem
I0323 23:26:46.057920 427158 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-62012/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16143-62012/.minikube/cert.pem (1123 bytes)
I0323 23:26:46.057987 427158 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-62012/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16143-62012/.minikube/key.pem
I0323 23:26:46.058014 427158 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-62012/.minikube/key.pem, removing ...
I0323 23:26:46.058025 427158 exec_runner.go:207] rm: /home/jenkins/minikube-integration/16143-62012/.minikube/key.pem
I0323 23:26:46.058056 427158 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-62012/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16143-62012/.minikube/key.pem (1675 bytes)
I0323 23:26:46.058133 427158 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16143-62012/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16143-62012/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16143-62012/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-286741 san=[192.168.85.2 127.0.0.1 localhost 127.0.0.1 minikube force-systemd-env-286741]
I0323 23:26:46.508497 427158 provision.go:172] copyRemoteCerts
I0323 23:26:46.508591 427158 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0323 23:26:46.508655 427158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-286741
I0323 23:26:46.583159 427158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33004 SSHKeyPath:/home/jenkins/minikube-integration/16143-62012/.minikube/machines/force-systemd-env-286741/id_rsa Username:docker}
I0323 23:26:46.668948 427158 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-62012/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0323 23:26:46.669009 427158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-62012/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0323 23:26:46.687152 427158 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-62012/.minikube/machines/server.pem -> /etc/docker/server.pem
I0323 23:26:46.687222 427158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-62012/.minikube/machines/server.pem --> /etc/docker/server.pem (1245 bytes)
I0323 23:26:46.706760 427158 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-62012/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0323 23:26:46.706834 427158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-62012/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0323 23:26:46.724180 427158 provision.go:86] duration metric: configureAuth took 746.155987ms
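The certificates copied above are what secure the dockerd TCP endpoint (port 2376) configured in the unit file further down. As a rough illustration only, assuming the node IP 192.168.85.2 from the SAN list above is reachable from the host and using this profile's client cert paths, such an endpoint could be probed with the standard docker TLS flags:
  docker --tlsverify \
    --tlscacert /home/jenkins/minikube-integration/16143-62012/.minikube/certs/ca.pem \
    --tlscert   /home/jenkins/minikube-integration/16143-62012/.minikube/certs/cert.pem \
    --tlskey    /home/jenkins/minikube-integration/16143-62012/.minikube/certs/key.pem \
    -H tcp://192.168.85.2:2376 version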
I0323 23:26:46.724211 427158 ubuntu.go:193] setting minikube options for container-runtime
I0323 23:26:46.724415 427158 config.go:182] Loaded profile config "force-systemd-env-286741": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.3
I0323 23:26:46.724478 427158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-286741
I0323 23:26:46.793992 427158 main.go:141] libmachine: Using SSH client type: native
I0323 23:26:46.794421 427158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e3e0] 0x811480 <nil> [] 0s} 127.0.0.1 33004 <nil> <nil>}
I0323 23:26:46.794437 427158 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0323 23:26:46.909667 427158 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
I0323 23:26:46.909696 427158 ubuntu.go:71] root file system type: overlay
I0323 23:26:46.909827 427158 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0323 23:26:46.909896 427158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-286741
I0323 23:26:46.979665 427158 main.go:141] libmachine: Using SSH client type: native
I0323 23:26:46.980533 427158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e3e0] 0x811480 <nil> [] 0s} 127.0.0.1 33004 <nil> <nil>}
I0323 23:26:46.980649 427158 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0323 23:26:47.134741 427158 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0323 23:26:47.134814 427158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-286741
I0323 23:26:47.203471 427158 main.go:141] libmachine: Using SSH client type: native
I0323 23:26:47.203895 427158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e3e0] 0x811480 <nil> [] 0s} 127.0.0.1 33004 <nil> <nil>}
I0323 23:26:47.203914 427158 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0323 23:26:47.958910 427158 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2023-02-09 19:46:56.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2023-03-23 23:26:47.129506351 +0000
@@ -1,30 +1,32 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
-Wants=network-online.target containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
+Wants=network-online.target
Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutStartSec=0
-RestartSec=2
-Restart=always
+Restart=on-failure
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
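The SSH command above only swaps in docker.service.new and restarts dockerd when the rendered unit actually differs from the installed one; the diff that triggered the restart is echoed back in the output. A minimal sketch of the same compare-then-replace pattern for a generic unit (render_unit and app.service are placeholders, not names from this log):
  render_unit > /tmp/app.service.new
  if ! sudo diff -u /lib/systemd/system/app.service /tmp/app.service.new; then
    # files differ: install the new unit and restart the service
    sudo mv /tmp/app.service.new /lib/systemd/system/app.service
    sudo systemctl daemon-reload && sudo systemctl restart app.service
  fi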
I0323 23:26:47.958955 427158 machine.go:91] provisioned docker machine in 2.473006765s
I0323 23:26:47.958969 427158 client.go:171] LocalClient.Create took 8.678571965s
I0323 23:26:47.958985 427158 start.go:167] duration metric: libmachine.API.Create for "force-systemd-env-286741" took 8.67862836s
I0323 23:26:47.959002 427158 start.go:300] post-start starting for "force-systemd-env-286741" (driver="docker")
I0323 23:26:47.959010 427158 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0323 23:26:47.959086 427158 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0323 23:26:47.959133 427158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-286741
I0323 23:26:48.039006 427158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33004 SSHKeyPath:/home/jenkins/minikube-integration/16143-62012/.minikube/machines/force-systemd-env-286741/id_rsa Username:docker}
I0323 23:26:48.138241 427158 ssh_runner.go:195] Run: cat /etc/os-release
I0323 23:26:48.141753 427158 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0323 23:26:48.141790 427158 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0323 23:26:48.141804 427158 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0323 23:26:48.141812 427158 info.go:137] Remote host: Ubuntu 20.04.5 LTS
I0323 23:26:48.141823 427158 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-62012/.minikube/addons for local assets ...
I0323 23:26:48.141882 427158 filesync.go:126] Scanning /home/jenkins/minikube-integration/16143-62012/.minikube/files for local assets ...
I0323 23:26:48.141972 427158 filesync.go:149] local asset: /home/jenkins/minikube-integration/16143-62012/.minikube/files/etc/ssl/certs/687022.pem -> 687022.pem in /etc/ssl/certs
I0323 23:26:48.141981 427158 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16143-62012/.minikube/files/etc/ssl/certs/687022.pem -> /etc/ssl/certs/687022.pem
I0323 23:26:48.142083 427158 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0323 23:26:48.149479 427158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-62012/.minikube/files/etc/ssl/certs/687022.pem --> /etc/ssl/certs/687022.pem (1708 bytes)
I0323 23:26:48.170718 427158 start.go:303] post-start completed in 211.698395ms
I0323 23:26:48.171159 427158 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-286741
I0323 23:26:48.255406 427158 profile.go:148] Saving config to /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/force-systemd-env-286741/config.json ...
I0323 23:26:48.255709 427158 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0323 23:26:48.255768 427158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-286741
I0323 23:26:48.348731 427158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33004 SSHKeyPath:/home/jenkins/minikube-integration/16143-62012/.minikube/machines/force-systemd-env-286741/id_rsa Username:docker}
I0323 23:26:48.444848 427158 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0323 23:26:48.454096 427158 start.go:128] duration metric: createHost completed in 9.176760391s
I0323 23:26:48.454122 427158 start.go:83] releasing machines lock for "force-systemd-env-286741", held for 9.176923746s
I0323 23:26:48.454203 427158 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-286741
I0323 23:26:48.544171 427158 ssh_runner.go:195] Run: cat /version.json
I0323 23:26:48.544227 427158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-286741
I0323 23:26:48.544232 427158 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0323 23:26:48.544306 427158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-286741
I0323 23:26:48.702573 427158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33004 SSHKeyPath:/home/jenkins/minikube-integration/16143-62012/.minikube/machines/force-systemd-env-286741/id_rsa Username:docker}
I0323 23:26:48.713344 427158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33004 SSHKeyPath:/home/jenkins/minikube-integration/16143-62012/.minikube/machines/force-systemd-env-286741/id_rsa Username:docker}
I0323 23:26:48.792996 427158 ssh_runner.go:195] Run: systemctl --version
I0323 23:26:47.250761 428061 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16143-62012/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-063647:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 -I lz4 -xf /preloaded.tar -C /extractDir: (3.180840905s)
I0323 23:26:47.250789 428061 kic.go:199] duration metric: took 3.180984 seconds to extract preloaded images to volume
W0323 23:26:47.250903 428061 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0323 23:26:47.250984 428061 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0323 23:26:47.383772 428061 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-063647 --name old-k8s-version-063647 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-063647 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-063647 --network old-k8s-version-063647 --ip 192.168.94.2 --volume old-k8s-version-063647:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978
I0323 23:26:47.858547 428061 cli_runner.go:164] Run: docker container inspect old-k8s-version-063647 --format={{.State.Running}}
I0323 23:26:47.933060 428061 cli_runner.go:164] Run: docker container inspect old-k8s-version-063647 --format={{.State.Status}}
I0323 23:26:48.018265 428061 cli_runner.go:164] Run: docker exec old-k8s-version-063647 stat /var/lib/dpkg/alternatives/iptables
I0323 23:26:48.141026 428061 oci.go:144] the created container "old-k8s-version-063647" has a running status.
I0323 23:26:48.141055 428061 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16143-62012/.minikube/machines/old-k8s-version-063647/id_rsa...
I0323 23:26:48.262302 428061 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16143-62012/.minikube/machines/old-k8s-version-063647/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0323 23:26:48.410628 428061 cli_runner.go:164] Run: docker container inspect old-k8s-version-063647 --format={{.State.Status}}
I0323 23:26:48.521228 428061 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0323 23:26:48.521255 428061 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-063647 chown docker:docker /home/docker/.ssh/authorized_keys]
I0323 23:26:48.710098 428061 cli_runner.go:164] Run: docker container inspect old-k8s-version-063647 --format={{.State.Status}}
I0323 23:26:48.794908 428061 machine.go:88] provisioning docker machine ...
I0323 23:26:48.794950 428061 ubuntu.go:169] provisioning hostname "old-k8s-version-063647"
I0323 23:26:48.795019 428061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-063647
I0323 23:26:48.888641 428061 main.go:141] libmachine: Using SSH client type: native
I0323 23:26:48.889333 428061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e3e0] 0x811480 <nil> [] 0s} 127.0.0.1 33009 <nil> <nil>}
I0323 23:26:48.889367 428061 main.go:141] libmachine: About to run SSH command:
sudo hostname old-k8s-version-063647 && echo "old-k8s-version-063647" | sudo tee /etc/hostname
I0323 23:26:49.019472 428061 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-063647
I0323 23:26:49.019553 428061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-063647
I0323 23:26:49.106103 428061 main.go:141] libmachine: Using SSH client type: native
I0323 23:26:49.106769 428061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e3e0] 0x811480 <nil> [] 0s} 127.0.0.1 33009 <nil> <nil>}
I0323 23:26:49.106808 428061 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sold-k8s-version-063647' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-063647/g' /etc/hosts;
else
echo '127.0.1.1 old-k8s-version-063647' | sudo tee -a /etc/hosts;
fi
fi
I0323 23:26:49.229809 428061 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0323 23:26:49.229846 428061 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16143-62012/.minikube CaCertPath:/home/jenkins/minikube-integration/16143-62012/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16143-62012/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16143-62012/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16143-62012/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16143-62012/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16143-62012/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16143-62012/.minikube}
I0323 23:26:49.229893 428061 ubuntu.go:177] setting up certificates
I0323 23:26:49.229909 428061 provision.go:83] configureAuth start
I0323 23:26:49.229969 428061 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-063647
I0323 23:26:49.322026 428061 provision.go:138] copyHostCerts
I0323 23:26:49.322105 428061 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-62012/.minikube/ca.pem, removing ...
I0323 23:26:49.322114 428061 exec_runner.go:207] rm: /home/jenkins/minikube-integration/16143-62012/.minikube/ca.pem
I0323 23:26:49.322170 428061 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-62012/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16143-62012/.minikube/ca.pem (1078 bytes)
I0323 23:26:49.322240 428061 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-62012/.minikube/cert.pem, removing ...
I0323 23:26:49.322244 428061 exec_runner.go:207] rm: /home/jenkins/minikube-integration/16143-62012/.minikube/cert.pem
I0323 23:26:49.322265 428061 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-62012/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16143-62012/.minikube/cert.pem (1123 bytes)
I0323 23:26:49.322332 428061 exec_runner.go:144] found /home/jenkins/minikube-integration/16143-62012/.minikube/key.pem, removing ...
I0323 23:26:49.322337 428061 exec_runner.go:207] rm: /home/jenkins/minikube-integration/16143-62012/.minikube/key.pem
I0323 23:26:49.322355 428061 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16143-62012/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16143-62012/.minikube/key.pem (1675 bytes)
I0323 23:26:49.322394 428061 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16143-62012/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16143-62012/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16143-62012/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-063647 san=[192.168.94.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-063647]
I0323 23:26:49.564453 428061 provision.go:172] copyRemoteCerts
I0323 23:26:49.564520 428061 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0323 23:26:49.564557 428061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-063647
I0323 23:26:49.641965 428061 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33009 SSHKeyPath:/home/jenkins/minikube-integration/16143-62012/.minikube/machines/old-k8s-version-063647/id_rsa Username:docker}
I0323 23:26:49.733721 428061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-62012/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
I0323 23:26:49.753158 428061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-62012/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0323 23:26:49.771013 428061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16143-62012/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0323 23:26:49.791203 428061 provision.go:86] duration metric: configureAuth took 561.272972ms
I0323 23:26:49.791234 428061 ubuntu.go:193] setting minikube options for container-runtime
I0323 23:26:49.791439 428061 config.go:182] Loaded profile config "old-k8s-version-063647": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
I0323 23:26:49.791508 428061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-063647
I0323 23:26:49.870915 428061 main.go:141] libmachine: Using SSH client type: native
I0323 23:26:49.871642 428061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e3e0] 0x811480 <nil> [] 0s} 127.0.0.1 33009 <nil> <nil>}
I0323 23:26:49.871668 428061 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0323 23:26:49.989937 428061 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
I0323 23:26:49.989968 428061 ubuntu.go:71] root file system type: overlay
I0323 23:26:49.990126 428061 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0323 23:26:49.990208 428061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-063647
I0323 23:26:48.836687 427158 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0323 23:26:48.841606 427158 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0323 23:26:48.864185 427158 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0323 23:26:48.864266 427158 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0323 23:26:48.881822 427158 cni.go:261] disabled [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
I0323 23:26:48.881849 427158 start.go:481] detecting cgroup driver to use...
I0323 23:26:48.881869 427158 start.go:485] using "systemd" cgroup driver as enforced via flags
I0323 23:26:48.881966 427158 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0323 23:26:48.898313 427158 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0323 23:26:48.907494 427158 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0323 23:26:48.917456 427158 containerd.go:145] configuring containerd to use "systemd" as cgroup driver...
I0323 23:26:48.917564 427158 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
I0323 23:26:48.927215 427158 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0323 23:26:48.935905 427158 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0323 23:26:48.944134 427158 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0323 23:26:48.952334 427158 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0323 23:26:48.959676 427158 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
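Taken together, the sed edits above point containerd at the registry.k8s.io/pause:3.9 sandbox image, force the runc v2 runtime and enable the systemd cgroup driver. In standard containerd config.toml syntax (not dumped from this node), the relevant runc stanza ends up roughly as:
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
    runtime_type = "io.containerd.runc.v2"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true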
I0323 23:26:48.971410 427158 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0323 23:26:48.979151 427158 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0323 23:26:48.986222 427158 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0323 23:26:49.087285 427158 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0323 23:26:49.172432 427158 start.go:481] detecting cgroup driver to use...
I0323 23:26:49.172458 427158 start.go:485] using "systemd" cgroup driver as enforced via flags
I0323 23:26:49.172498 427158 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0323 23:26:49.187707 427158 cruntime.go:276] skipping containerd shutdown because we are bound to it
I0323 23:26:49.187768 427158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0323 23:26:49.201896 427158 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0323 23:26:49.216844 427158 ssh_runner.go:195] Run: which cri-dockerd
I0323 23:26:49.220616 427158 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0323 23:26:49.229950 427158 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0323 23:26:49.265586 427158 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0323 23:26:49.361874 427158 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0323 23:26:49.456508 427158 docker.go:538] configuring docker to use "systemd" as cgroup driver...
I0323 23:26:49.456538 427158 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (143 bytes)
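The 143-byte daemon.json pushed above is what switches dockerd itself to the systemd cgroup driver; its contents are not echoed in this log. A typical minikube-style payload, shown here only as an assumption about what those 143 bytes contain, would be along the lines of:
  {
    "exec-opts": ["native.cgroupdriver=systemd"],
    "log-driver": "json-file",
    "log-opts": { "max-size": "100m" },
    "storage-driver": "overlay2"
  }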
I0323 23:26:49.472752 427158 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0323 23:26:49.573497 427158 ssh_runner.go:195] Run: sudo systemctl restart docker
I0323 23:26:49.819861 427158 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0323 23:26:49.905025 427158 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0323 23:26:49.985735 427158 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0323 23:26:50.077462 427158 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0323 23:26:50.164000 427158 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0323 23:26:50.176212 427158 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0323 23:26:50.272787 427158 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
I0323 23:26:50.342739 427158 start.go:528] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0323 23:26:50.342814 427158 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0323 23:26:50.346501 427158 start.go:549] Will wait 60s for crictl version
I0323 23:26:50.346550 427158 ssh_runner.go:195] Run: which crictl
I0323 23:26:50.349633 427158 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0323 23:26:50.381308 427158 start.go:565] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 23.0.1
RuntimeApiVersion: v1alpha2
I0323 23:26:50.381358 427158 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0323 23:26:50.408501 427158 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
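With /etc/crictl.yaml now pointing at the cri-dockerd socket, the probe above reports runtime docker 23.0.1 behind CRI API v1alpha2. The same check can be repeated by hand with the endpoint passed explicitly instead of read from crictl.yaml:
  sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version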
I0323 23:26:48.219558 360910 api_server.go:252] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
I0323 23:26:48.219977 360910 api_server.go:268] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
I0323 23:26:48.415263 360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0323 23:26:48.444393 360910 logs.go:277] 2 containers: [e04b42305ee7 0d8b85178a1f]
I0323 23:26:48.444484 360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0323 23:26:48.481867 360910 logs.go:277] 1 containers: [a90d829451b2]
I0323 23:26:48.481950 360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0323 23:26:48.503185 360910 logs.go:277] 0 containers: []
W0323 23:26:48.503207 360910 logs.go:279] No container was found matching "coredns"
I0323 23:26:48.503253 360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0323 23:26:48.526729 360910 logs.go:277] 2 containers: [c527be391322 4bb7f84567d3]
I0323 23:26:48.526806 360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0323 23:26:48.553782 360910 logs.go:277] 1 containers: [333ad261cea4]
I0323 23:26:48.553860 360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0323 23:26:48.582449 360910 logs.go:277] 2 containers: [9dd80939614e af93893100e7]
I0323 23:26:48.582541 360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I0323 23:26:48.606619 360910 logs.go:277] 0 containers: []
W0323 23:26:48.606650 360910 logs.go:279] No container was found matching "kindnet"
I0323 23:26:48.606712 360910 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0323 23:26:48.638702 360910 logs.go:277] 1 containers: [eac6b13c2df0]
I0323 23:26:48.638756 360910 logs.go:123] Gathering logs for kube-apiserver [0d8b85178a1f] ...
I0323 23:26:48.638773 360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d8b85178a1f"
I0323 23:26:48.711513 360910 logs.go:123] Gathering logs for kube-controller-manager [9dd80939614e] ...
I0323 23:26:48.711551 360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dd80939614e"
I0323 23:26:48.740243 360910 logs.go:123] Gathering logs for kube-controller-manager [af93893100e7] ...
I0323 23:26:48.740273 360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af93893100e7"
I0323 23:26:48.792520 360910 logs.go:123] Gathering logs for kube-scheduler [c527be391322] ...
I0323 23:26:48.792567 360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c527be391322"
I0323 23:26:48.891767 360910 logs.go:123] Gathering logs for storage-provisioner [eac6b13c2df0] ...
I0323 23:26:48.891809 360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eac6b13c2df0"
I0323 23:26:48.914706 360910 logs.go:123] Gathering logs for Docker ...
I0323 23:26:48.914738 360910 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0323 23:26:48.954192 360910 logs.go:123] Gathering logs for kube-proxy [333ad261cea4] ...
I0323 23:26:48.954221 360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 333ad261cea4"
I0323 23:26:48.976760 360910 logs.go:123] Gathering logs for kubelet ...
I0323 23:26:48.976795 360910 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0323 23:26:49.093786 360910 logs.go:123] Gathering logs for describe nodes ...
I0323 23:26:49.093831 360910 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0323 23:26:49.162630 360910 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0323 23:26:49.162655 360910 logs.go:123] Gathering logs for kube-apiserver [e04b42305ee7] ...
I0323 23:26:49.162670 360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e04b42305ee7"
I0323 23:26:49.195807 360910 logs.go:123] Gathering logs for etcd [a90d829451b2] ...
I0323 23:26:49.195851 360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a90d829451b2"
I0323 23:26:49.238251 360910 logs.go:123] Gathering logs for kube-scheduler [4bb7f84567d3] ...
I0323 23:26:49.238286 360910 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bb7f84567d3"
I0323 23:26:49.275913 360910 logs.go:123] Gathering logs for dmesg ...
I0323 23:26:49.276000 360910 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0323 23:26:49.302877 360910 logs.go:123] Gathering logs for container status ...
I0323 23:26:49.302974 360910 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
*
* ==> Docker <==
* -- Logs begin at Thu 2023-03-23 23:25:05 UTC, end at Thu 2023-03-23 23:26:52 UTC. --
Mar 23 23:25:49 pause-574316 dockerd[5186]: time="2023-03-23T23:25:49.002500928Z" level=info msg="[core] [Channel #4] Channel Connectivity change to CONNECTING" module=grpc
Mar 23 23:25:49 pause-574316 dockerd[5186]: time="2023-03-23T23:25:49.002674094Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to READY" module=grpc
Mar 23 23:25:49 pause-574316 dockerd[5186]: time="2023-03-23T23:25:49.002709828Z" level=info msg="[core] [Channel #4] Channel Connectivity change to READY" module=grpc
Mar 23 23:25:49 pause-574316 dockerd[5186]: time="2023-03-23T23:25:49.003286601Z" level=info msg="[graphdriver] trying configured driver: overlay2"
Mar 23 23:25:49 pause-574316 dockerd[5186]: time="2023-03-23T23:25:49.025479889Z" level=info msg="Loading containers: start."
Mar 23 23:25:49 pause-574316 dockerd[5186]: time="2023-03-23T23:25:49.172830226Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Mar 23 23:25:49 pause-574316 dockerd[5186]: time="2023-03-23T23:25:49.214010134Z" level=info msg="Loading containers: done."
Mar 23 23:25:49 pause-574316 dockerd[5186]: time="2023-03-23T23:25:49.225800214Z" level=info msg="Docker daemon" commit=bc3805a graphdriver=overlay2 version=23.0.1
Mar 23 23:25:49 pause-574316 dockerd[5186]: time="2023-03-23T23:25:49.225888560Z" level=info msg="Daemon has completed initialization"
Mar 23 23:25:49 pause-574316 dockerd[5186]: time="2023-03-23T23:25:49.240113456Z" level=info msg="[core] [Server #7] Server created" module=grpc
Mar 23 23:25:49 pause-574316 systemd[1]: Started Docker Application Container Engine.
Mar 23 23:25:49 pause-574316 dockerd[5186]: time="2023-03-23T23:25:49.246358737Z" level=info msg="API listen on [::]:2376"
Mar 23 23:25:49 pause-574316 dockerd[5186]: time="2023-03-23T23:25:49.256115277Z" level=info msg="API listen on /var/run/docker.sock"
Mar 23 23:26:11 pause-574316 dockerd[5186]: time="2023-03-23T23:26:11.796102440Z" level=info msg="ignoring event" container=6a198df97e4bd33611868552786c34b16a1896b4a18709ad6eaa65e7486b5d20 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 23 23:26:20 pause-574316 dockerd[5186]: time="2023-03-23T23:26:20.145302003Z" level=info msg="ignoring event" container=45416a5cd36b4138409f0bf454eb922e1d3369a86ce1c0c803f7da26778cf7f4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 23 23:26:20 pause-574316 dockerd[5186]: time="2023-03-23T23:26:20.379532489Z" level=info msg="ignoring event" container=60c1dee0f1786db1b413aa688e7a57acd71e6c18979e95b21131d3496a98cad8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 23 23:26:20 pause-574316 dockerd[5186]: time="2023-03-23T23:26:20.392985764Z" level=info msg="ignoring event" container=840b0c35d4448d1362a7bc020e0fac35331ad72438dfc00e79685e0baca6b11b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 23 23:26:20 pause-574316 dockerd[5186]: time="2023-03-23T23:26:20.453179245Z" level=info msg="ignoring event" container=656b70fafbc2b7e6611131272fea7433846a18987047e3c8d2e446e8b5290cce module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 23 23:26:20 pause-574316 dockerd[5186]: time="2023-03-23T23:26:20.457378879Z" level=info msg="ignoring event" container=f70a37494730e3c42d183c94cd69472a7f672f61f330f75482164f78d4eda989 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 23 23:26:20 pause-574316 dockerd[5186]: time="2023-03-23T23:26:20.459285840Z" level=info msg="ignoring event" container=2b7bc2ac835be2dc569bede97afe45c6357e58e4e23f23539dc1433d3a84bedc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 23 23:26:20 pause-574316 dockerd[5186]: time="2023-03-23T23:26:20.460667173Z" level=info msg="ignoring event" container=d517e8e4d5d2dbd1822c028a0de7f091686d0e0657198f93573dd122ee6485a8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 23 23:26:20 pause-574316 dockerd[5186]: time="2023-03-23T23:26:20.460699514Z" level=info msg="ignoring event" container=4b1c73f39f8c07193f987da6a6d6784c9f87cb43caa7ea5f424e367b0f2e27e2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 23 23:26:20 pause-574316 dockerd[5186]: time="2023-03-23T23:26:20.467741307Z" level=info msg="ignoring event" container=80c388522552702a89135b09d2d073b9c57d1fbc851a0a89b0cec032be049f71 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 23 23:26:20 pause-574316 dockerd[5186]: time="2023-03-23T23:26:20.471167750Z" level=info msg="ignoring event" container=7ff3dcd747a3b0f733eda143cf5993de0d0e1afd3dbd1b2b2f9a8fd3dbea2be9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 23 23:26:25 pause-574316 dockerd[5186]: time="2023-03-23T23:26:25.347736368Z" level=info msg="ignoring event" container=a9b1dc3910d9b5195bfff4b0d6cedbf54b214159654d4e23645c839bf053ad23 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
0f0398bddb511 5185b96f0becf 20 seconds ago Running coredns 3 542477f9c5e1d
43a8930300a57 92ed2bec97a63 20 seconds ago Running kube-proxy 2 28a061395dad5
e7cd8ca7c7242 5a79047369329 25 seconds ago Running kube-scheduler 3 4c131416edb23
f946ab43717f1 ce8c2293ef09c 25 seconds ago Running kube-controller-manager 3 3ca9ec9bef2c4
1137111a33d08 fce326961ae2d 25 seconds ago Running etcd 3 f4e9af6f99313
cea7ca7eb9ad0 1d9b3cbae03ce 30 seconds ago Running kube-apiserver 2 f84cdf335e887
656b70fafbc2b fce326961ae2d 41 seconds ago Exited etcd 2 60c1dee0f1786
2b7bc2ac835be 5a79047369329 52 seconds ago Exited kube-scheduler 2 4b1c73f39f8c0
7ff3dcd747a3b 92ed2bec97a63 53 seconds ago Exited kube-proxy 1 d517e8e4d5d2d
45416a5cd36b4 ce8c2293ef09c 53 seconds ago Exited kube-controller-manager 2 f70a37494730e
a9b1dc3910d9b 5185b96f0becf About a minute ago Exited coredns 2 840b0c35d4448
6a198df97e4bd 1d9b3cbae03ce About a minute ago Exited kube-apiserver 1 80c3885225527
*
* ==> coredns [0f0398bddb51] <==
* .:53
[INFO] plugin/reload: Running configuration SHA512 = bfa258e3dfcd8004ab6c7d60772766a595ee209e49c62e6ae56bd911a145318b327e0c73bbccac30667047dafea6a8c1149027cea85d58a2246677e8ec1caab2
CoreDNS-1.9.3
linux/amd64, go1.18.2, 45b0a11
[INFO] 127.0.0.1:52573 - 39862 "HINFO IN 4074527240347548607.320685648437704123. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.037884079s
*
* ==> coredns [a9b1dc3910d9] <==
* [INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
.:53
[INFO] plugin/reload: Running configuration SHA512 = bfa258e3dfcd8004ab6c7d60772766a595ee209e49c62e6ae56bd911a145318b327e0c73bbccac30667047dafea6a8c1149027cea85d58a2246677e8ec1caab2
CoreDNS-1.9.3
linux/amd64, go1.18.2, 45b0a11
[INFO] 127.0.0.1:45219 - 2821 "HINFO IN 6139167459808748397.3590652508084774261. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.035135004s
[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": net/http: TLS handshake timeout
[INFO] plugin/ready: Still waiting on: "kubernetes"
[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
*
* ==> describe nodes <==
* Name: pause-574316
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=pause-574316
kubernetes.io/os=linux
minikube.k8s.io/commit=e9478c9159ab3ccef5e7f933edc25c8da75bed69
minikube.k8s.io/name=pause-574316
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2023_03_23T23_25_21_0700
minikube.k8s.io/version=v1.29.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Thu, 23 Mar 2023 23:25:18 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: pause-574316
AcquireTime: <unset>
RenewTime: Thu, 23 Mar 2023 23:26:51 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Thu, 23 Mar 2023 23:26:30 +0000 Thu, 23 Mar 2023 23:25:16 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 23 Mar 2023 23:26:30 +0000 Thu, 23 Mar 2023 23:25:16 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Thu, 23 Mar 2023 23:26:30 +0000 Thu, 23 Mar 2023 23:25:16 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Thu, 23 Mar 2023 23:26:30 +0000 Thu, 23 Mar 2023 23:25:32 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.67.2
Hostname: pause-574316
Capacity:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32871748Ki
pods: 110
Allocatable:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32871748Ki
pods: 110
System Info:
Machine ID: b249c14bbd9147e887f6315aff00ef06
System UUID: 7bdff168-7cdd-493c-bdda-f1cc26739b6e
Boot ID: 9d192f19-d9f5-4df3-a502-4030f2da5343
Kernel Version: 5.15.0-1030-gcp
OS Image: Ubuntu 20.04.5 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://23.0.1
Kubelet Version: v1.26.3
Kube-Proxy Version: v1.26.3
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (6 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system coredns-787d4945fb-lljqk 100m (1%) 0 (0%) 70Mi (0%) 170Mi (0%) 78s
kube-system etcd-pause-574316 100m (1%) 0 (0%) 100Mi (0%) 0 (0%) 90s
kube-system kube-apiserver-pause-574316 250m (3%) 0 (0%) 0 (0%) 0 (0%) 93s
kube-system kube-controller-manager-pause-574316 200m (2%) 0 (0%) 0 (0%) 0 (0%) 91s
kube-system kube-proxy-lnk2t 0 (0%) 0 (0%) 0 (0%) 0 (0%) 79s
kube-system kube-scheduler-pause-574316 100m (1%) 0 (0%) 0 (0%) 0 (0%) 91s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 750m (9%) 0 (0%)
memory 170Mi (0%) 170Mi (0%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 78s kube-proxy
Normal Starting 20s kube-proxy
Normal NodeHasSufficientPID 98s (x3 over 98s) kubelet Node pause-574316 status is now: NodeHasSufficientPID
Normal NodeHasNoDiskPressure 98s (x4 over 98s) kubelet Node pause-574316 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientMemory 98s (x4 over 98s) kubelet Node pause-574316 status is now: NodeHasSufficientMemory
Normal Starting 91s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 91s kubelet Node pause-574316 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 91s kubelet Node pause-574316 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 91s kubelet Node pause-574316 status is now: NodeHasSufficientPID
Normal NodeNotReady 91s kubelet Node pause-574316 status is now: NodeNotReady
Normal NodeAllocatableEnforced 91s kubelet Updated Node Allocatable limit across pods
Normal NodeReady 80s kubelet Node pause-574316 status is now: NodeReady
Normal RegisteredNode 79s node-controller Node pause-574316 event: Registered Node pause-574316 in Controller
Normal Starting 25s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 25s (x8 over 25s) kubelet Node pause-574316 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 25s (x8 over 25s) kubelet Node pause-574316 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 25s (x7 over 25s) kubelet Node pause-574316 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 25s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 9s node-controller Node pause-574316 event: Registered Node pause-574316 in Controller
*
* ==> dmesg <==
* [ +0.000619] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
[ +0.000004] ll header: 00000000: ff ff ff ff ff ff da 9a 31 26 91 58 08 06
[ +46.489619] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff de 03 7b bf b1 b8 08 06
[Mar23 23:19] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
[ +0.000004] ll header: 00000000: ff ff ff ff ff ff 12 06 3d f3 17 47 08 06
[Mar23 23:21] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev bridge
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff 0e 93 92 d3 0d 7e 08 06
[ +0.437885] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 0e 93 92 d3 0d 7e 08 06
[Mar23 23:22] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff 9a 9e 53 5f 42 99 08 06
[Mar23 23:23] process 'docker/tmp/qemu-check941714971/check' started with executable stack
[ +9.389883] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff 6e f3 36 2c c1 cd 08 06
[Mar23 23:25] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff ae cb 28 07 13 77 08 06
[ +0.012995] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 0c 92 4c a9 1c 08 06
[ +15.547404] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff 76 10 ab 83 31 f9 08 06
[Mar23 23:26] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff da 20 81 ad 5c b9 08 06
[ +5.887427] IPv4: martian source 10.244.0.1 from 10.244.0.6, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff ba 6b a8 e3 05 d7 08 06
*
* ==> etcd [1137111a33d0] <==
* {"level":"info","ts":"2023-03-23T23:26:27.969Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2023-03-23T23:26:27.969Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.67.2:2380"}
{"level":"info","ts":"2023-03-23T23:26:27.969Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.67.2:2380"}
{"level":"info","ts":"2023-03-23T23:26:27.969Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2023-03-23T23:26:27.969Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2023-03-23T23:26:29.058Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 4"}
{"level":"info","ts":"2023-03-23T23:26:29.058Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 4"}
{"level":"info","ts":"2023-03-23T23:26:29.058Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 4"}
{"level":"info","ts":"2023-03-23T23:26:29.058Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 5"}
{"level":"info","ts":"2023-03-23T23:26:29.058Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 5"}
{"level":"info","ts":"2023-03-23T23:26:29.058Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 5"}
{"level":"info","ts":"2023-03-23T23:26:29.058Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 5"}
{"level":"info","ts":"2023-03-23T23:26:29.059Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:pause-574316 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
{"level":"info","ts":"2023-03-23T23:26:29.059Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-03-23T23:26:29.059Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-03-23T23:26:29.059Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2023-03-23T23:26:29.060Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2023-03-23T23:26:29.061Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2023-03-23T23:26:29.061Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
{"level":"info","ts":"2023-03-23T23:26:45.146Z","caller":"traceutil/trace.go:171","msg":"trace[1088553463] linearizableReadLoop","detail":"{readStateIndex:500; appliedIndex:499; }","duration":"187.629875ms","start":"2023-03-23T23:26:44.958Z","end":"2023-03-23T23:26:45.146Z","steps":["trace[1088553463] 'read index received' (duration: 113.126176ms)","trace[1088553463] 'applied index is now lower than readState.Index' (duration: 74.502878ms)"],"step_count":2}
{"level":"info","ts":"2023-03-23T23:26:45.146Z","caller":"traceutil/trace.go:171","msg":"trace[1657399943] transaction","detail":"{read_only:false; response_revision:463; number_of_response:1; }","duration":"197.637334ms","start":"2023-03-23T23:26:44.948Z","end":"2023-03-23T23:26:45.146Z","steps":["trace[1657399943] 'process raft request' (duration: 123.099553ms)","trace[1657399943] 'compare' (duration: 74.347233ms)"],"step_count":2}
{"level":"warn","ts":"2023-03-23T23:26:45.146Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"187.827176ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-controller-manager-pause-574316\" ","response":"range_response_count:1 size:6942"}
{"level":"info","ts":"2023-03-23T23:26:45.146Z","caller":"traceutil/trace.go:171","msg":"trace[666014890] range","detail":"{range_begin:/registry/pods/kube-system/kube-controller-manager-pause-574316; range_end:; response_count:1; response_revision:463; }","duration":"187.950429ms","start":"2023-03-23T23:26:44.958Z","end":"2023-03-23T23:26:45.146Z","steps":["trace[666014890] 'agreement among raft nodes before linearized reading' (duration: 187.770048ms)"],"step_count":1}
{"level":"warn","ts":"2023-03-23T23:26:45.429Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"133.41564ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/coredns\" ","response":"range_response_count:1 size:764"}
{"level":"info","ts":"2023-03-23T23:26:45.429Z","caller":"traceutil/trace.go:171","msg":"trace[1689761979] range","detail":"{range_begin:/registry/configmaps/kube-system/coredns; range_end:; response_count:1; response_revision:463; }","duration":"133.510104ms","start":"2023-03-23T23:26:45.295Z","end":"2023-03-23T23:26:45.429Z","steps":["trace[1689761979] 'range keys from in-memory index tree' (duration: 133.250873ms)"],"step_count":1}
*
* ==> etcd [656b70fafbc2] <==
* {"level":"info","ts":"2023-03-23T23:26:12.885Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2023-03-23T23:26:12.885Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.67.2:2380"}
{"level":"info","ts":"2023-03-23T23:26:12.885Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.67.2:2380"}
{"level":"info","ts":"2023-03-23T23:26:12.885Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2023-03-23T23:26:12.885Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2023-03-23T23:26:14.575Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 3"}
{"level":"info","ts":"2023-03-23T23:26:14.575Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 3"}
{"level":"info","ts":"2023-03-23T23:26:14.575Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 3"}
{"level":"info","ts":"2023-03-23T23:26:14.575Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 4"}
{"level":"info","ts":"2023-03-23T23:26:14.575Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 4"}
{"level":"info","ts":"2023-03-23T23:26:14.575Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 4"}
{"level":"info","ts":"2023-03-23T23:26:14.575Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 4"}
{"level":"info","ts":"2023-03-23T23:26:14.576Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:pause-574316 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
{"level":"info","ts":"2023-03-23T23:26:14.576Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-03-23T23:26:14.576Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-03-23T23:26:14.576Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2023-03-23T23:26:14.576Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2023-03-23T23:26:14.577Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
{"level":"info","ts":"2023-03-23T23:26:14.577Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2023-03-23T23:26:20.377Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
{"level":"info","ts":"2023-03-23T23:26:20.377Z","caller":"embed/etcd.go:373","msg":"closing etcd server","name":"pause-574316","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
{"level":"info","ts":"2023-03-23T23:26:20.380Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8688e899f7831fc7","current-leader-member-id":"8688e899f7831fc7"}
{"level":"info","ts":"2023-03-23T23:26:20.382Z","caller":"embed/etcd.go:568","msg":"stopping serving peer traffic","address":"192.168.67.2:2380"}
{"level":"info","ts":"2023-03-23T23:26:20.384Z","caller":"embed/etcd.go:573","msg":"stopped serving peer traffic","address":"192.168.67.2:2380"}
{"level":"info","ts":"2023-03-23T23:26:20.384Z","caller":"embed/etcd.go:375","msg":"closed etcd server","name":"pause-574316","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
*
* ==> kernel <==
* 23:26:52 up 2:09, 0 users, load average: 5.17, 4.13, 2.82
Linux pause-574316 5.15.0-1030-gcp #37~20.04.1-Ubuntu SMP Mon Feb 20 04:30:57 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.5 LTS"
*
* ==> kube-apiserver [6a198df97e4b] <==
* W0323 23:26:08.603014 1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {
"Addr": "127.0.0.1:2379",
"ServerName": "127.0.0.1",
"Attributes": null,
"BalancerAttributes": null,
"Type": 0,
"Metadata": null
}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
W0323 23:26:09.405661 1 logging.go:59] [core] [Channel #4 SubChannel #6] grpc: addrConn.createTransport failed to connect to {
"Addr": "127.0.0.1:2379",
"ServerName": "127.0.0.1",
"Attributes": null,
"BalancerAttributes": null,
"Type": 0,
"Metadata": null
}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
W0323 23:26:09.657900 1 logging.go:59] [core] [Channel #3 SubChannel #5] grpc: addrConn.createTransport failed to connect to {
"Addr": "127.0.0.1:2379",
"ServerName": "127.0.0.1",
"Attributes": null,
"BalancerAttributes": null,
"Type": 0,
"Metadata": null
}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
E0323 23:26:11.774251 1 run.go:74] "command failed" err="context deadline exceeded"
*
* ==> kube-apiserver [cea7ca7eb9ad] <==
* I0323 23:26:30.648351 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I0323 23:26:30.648430 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I0323 23:26:30.684300 1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
I0323 23:26:30.639853 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
I0323 23:26:30.639867 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0323 23:26:30.639933 1 autoregister_controller.go:141] Starting autoregister controller
I0323 23:26:30.690081 1 cache.go:32] Waiting for caches to sync for autoregister controller
I0323 23:26:30.690161 1 cache.go:39] Caches are synced for autoregister controller
I0323 23:26:30.701389 1 shared_informer.go:280] Caches are synced for node_authorizer
I0323 23:26:30.750507 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0323 23:26:30.750975 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0323 23:26:30.752373 1 apf_controller.go:366] Running API Priority and Fairness config worker
I0323 23:26:30.752385 1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
I0323 23:26:30.752497 1 shared_informer.go:280] Caches are synced for configmaps
I0323 23:26:30.753570 1 shared_informer.go:280] Caches are synced for cluster_authentication_trust_controller
I0323 23:26:30.753615 1 shared_informer.go:280] Caches are synced for crd-autoregister
I0323 23:26:31.339987 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0323 23:26:31.646840 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0323 23:26:32.375391 1 controller.go:615] quota admission added evaluator for: serviceaccounts
I0323 23:26:32.388141 1 controller.go:615] quota admission added evaluator for: deployments.apps
I0323 23:26:32.474747 1 controller.go:615] quota admission added evaluator for: daemonsets.apps
I0323 23:26:32.557448 1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0323 23:26:32.566478 1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0323 23:26:43.845098 1 controller.go:615] quota admission added evaluator for: endpoints
I0323 23:26:43.899216 1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
*
* ==> kube-controller-manager [45416a5cd36b] <==
* I0323 23:25:59.829591 1 serving.go:348] Generated self-signed cert in-memory
I0323 23:26:00.084118 1 controllermanager.go:182] Version: v1.26.3
I0323 23:26:00.084152 1 controllermanager.go:184] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0323 23:26:00.085310 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I0323 23:26:00.085306 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I0323 23:26:00.085554 1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
I0323 23:26:00.085646 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
F0323 23:26:20.087377 1 controllermanager.go:228] error building controller context: failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
*
* ==> kube-controller-manager [f946ab43717f] <==
* I0323 23:26:43.682858 1 shared_informer.go:280] Caches are synced for endpoint_slice_mirroring
I0323 23:26:43.685481 1 shared_informer.go:280] Caches are synced for GC
I0323 23:26:43.691799 1 shared_informer.go:280] Caches are synced for HPA
I0323 23:26:43.691846 1 shared_informer.go:280] Caches are synced for daemon sets
I0323 23:26:43.691921 1 shared_informer.go:280] Caches are synced for PVC protection
I0323 23:26:43.691962 1 shared_informer.go:280] Caches are synced for endpoint
I0323 23:26:43.692814 1 shared_informer.go:280] Caches are synced for ephemeral
I0323 23:26:43.692841 1 shared_informer.go:280] Caches are synced for cronjob
I0323 23:26:43.692907 1 shared_informer.go:280] Caches are synced for service account
I0323 23:26:43.696646 1 shared_informer.go:280] Caches are synced for taint
I0323 23:26:43.696746 1 node_lifecycle_controller.go:1438] Initializing eviction metric for zone:
I0323 23:26:43.696779 1 taint_manager.go:206] "Starting NoExecuteTaintManager"
W0323 23:26:43.696843 1 node_lifecycle_controller.go:1053] Missing timestamp for Node pause-574316. Assuming now as a timestamp.
I0323 23:26:43.696884 1 taint_manager.go:211] "Sending events to api server"
I0323 23:26:43.696913 1 node_lifecycle_controller.go:1254] Controller detected that zone is now in state Normal.
I0323 23:26:43.697076 1 event.go:294] "Event occurred" object="pause-574316" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-574316 event: Registered Node pause-574316 in Controller"
I0323 23:26:43.698625 1 shared_informer.go:280] Caches are synced for crt configmap
I0323 23:26:43.701545 1 shared_informer.go:280] Caches are synced for endpoint_slice
I0323 23:26:43.740889 1 shared_informer.go:280] Caches are synced for attach detach
I0323 23:26:43.792552 1 shared_informer.go:280] Caches are synced for disruption
I0323 23:26:43.821372 1 shared_informer.go:280] Caches are synced for resource quota
I0323 23:26:43.894489 1 shared_informer.go:280] Caches are synced for resource quota
I0323 23:26:44.210014 1 shared_informer.go:280] Caches are synced for garbage collector
I0323 23:26:44.229157 1 shared_informer.go:280] Caches are synced for garbage collector
I0323 23:26:44.229247 1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
*
* ==> kube-proxy [43a8930300a5] <==
* I0323 23:26:32.502821 1 node.go:163] Successfully retrieved node IP: 192.168.67.2
I0323 23:26:32.502919 1 server_others.go:109] "Detected node IP" address="192.168.67.2"
I0323 23:26:32.503040 1 server_others.go:535] "Using iptables proxy"
I0323 23:26:32.581352 1 server_others.go:176] "Using iptables Proxier"
I0323 23:26:32.581492 1 server_others.go:183] "kube-proxy running in dual-stack mode" ipFamily=IPv4
I0323 23:26:32.581507 1 server_others.go:184] "Creating dualStackProxier for iptables"
I0323 23:26:32.581525 1 server_others.go:465] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
I0323 23:26:32.581580 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
I0323 23:26:32.582126 1 server.go:655] "Version info" version="v1.26.3"
I0323 23:26:32.582166 1 server.go:657] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0323 23:26:32.582886 1 config.go:226] "Starting endpoint slice config controller"
I0323 23:26:32.583504 1 shared_informer.go:273] Waiting for caches to sync for endpoint slice config
I0323 23:26:32.583082 1 config.go:317] "Starting service config controller"
I0323 23:26:32.583523 1 shared_informer.go:273] Waiting for caches to sync for service config
I0323 23:26:32.583137 1 config.go:444] "Starting node config controller"
I0323 23:26:32.583545 1 shared_informer.go:273] Waiting for caches to sync for node config
I0323 23:26:32.684533 1 shared_informer.go:280] Caches are synced for service config
I0323 23:26:32.684613 1 shared_informer.go:280] Caches are synced for node config
I0323 23:26:32.684623 1 shared_informer.go:280] Caches are synced for endpoint slice config
*
* ==> kube-proxy [7ff3dcd747a3] <==
* E0323 23:26:09.977748 1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-574316": net/http: TLS handshake timeout
E0323 23:26:12.783360 1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-574316": dial tcp 192.168.67.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.67.2:39882->192.168.67.2:8443: read: connection reset by peer
E0323 23:26:14.853949 1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-574316": dial tcp 192.168.67.2:8443: connect: connection refused
E0323 23:26:18.965897 1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-574316": dial tcp 192.168.67.2:8443: connect: connection refused
*
* ==> kube-scheduler [2b7bc2ac835b] <==
* W0323 23:26:16.679162 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: Get "https://192.168.67.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
E0323 23:26:16.679200 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.67.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
W0323 23:26:16.812219 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: Get "https://192.168.67.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
E0323 23:26:16.812268 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.67.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
W0323 23:26:16.846940 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://192.168.67.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
E0323 23:26:16.846981 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.67.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
W0323 23:26:17.007369 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: Get "https://192.168.67.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
E0323 23:26:17.007406 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.67.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
W0323 23:26:19.575702 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: Get "https://192.168.67.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
E0323 23:26:19.575741 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.67.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
W0323 23:26:19.775890 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: Get "https://192.168.67.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
E0323 23:26:19.775937 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.67.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
W0323 23:26:19.850977 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: Get "https://192.168.67.2:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
E0323 23:26:19.851021 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.67.2:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
W0323 23:26:20.060721 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: Get "https://192.168.67.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
E0323 23:26:20.060762 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.67.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
W0323 23:26:20.080470 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: Get "https://192.168.67.2:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%2Cstatus.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
E0323 23:26:20.080525 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.67.2:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%2Cstatus.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
W0323 23:26:20.208535 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: Get "https://192.168.67.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
E0323 23:26:20.208595 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.67.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
I0323 23:26:20.353988 1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
I0323 23:26:20.354103 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
E0323 23:26:20.354167 1 shared_informer.go:276] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0323 23:26:20.354182 1 configmap_cafile_content.go:210] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
E0323 23:26:20.354209 1 run.go:74] "command failed" err="finished without leader elect"
*
* ==> kube-scheduler [e7cd8ca7c724] <==
* I0323 23:26:28.403386 1 serving.go:348] Generated self-signed cert in-memory
I0323 23:26:30.771476 1 server.go:152] "Starting Kubernetes Scheduler" version="v1.26.3"
I0323 23:26:30.771503 1 server.go:154] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0323 23:26:30.778353 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
I0323 23:26:30.778381 1 shared_informer.go:273] Waiting for caches to sync for RequestHeaderAuthRequestController
I0323 23:26:30.778428 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0323 23:26:30.778441 1 shared_informer.go:273] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0323 23:26:30.778478 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I0323 23:26:30.778489 1 shared_informer.go:273] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0323 23:26:30.779761 1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
I0323 23:26:30.784753 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0323 23:26:30.878975 1 shared_informer.go:280] Caches are synced for RequestHeaderAuthRequestController
I0323 23:26:30.879041 1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0323 23:26:30.878980 1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
*
* ==> kubelet <==
* -- Logs begin at Thu 2023-03-23 23:25:05 UTC, end at Thu 2023-03-23 23:26:53 UTC. --
Mar 23 23:26:27 pause-574316 kubelet[7640]: I0323 23:26:27.503080 7640 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/16bcc950c7983e1395e2f1091ca3b040-usr-share-ca-certificates\") pod \"kube-controller-manager-pause-574316\" (UID: \"16bcc950c7983e1395e2f1091ca3b040\") " pod="kube-system/kube-controller-manager-pause-574316"
Mar 23 23:26:27 pause-574316 kubelet[7640]: I0323 23:26:27.748833 7640 scope.go:115] "RemoveContainer" containerID="656b70fafbc2b7e6611131272fea7433846a18987047e3c8d2e446e8b5290cce"
Mar 23 23:26:27 pause-574316 kubelet[7640]: I0323 23:26:27.763712 7640 scope.go:115] "RemoveContainer" containerID="45416a5cd36b4138409f0bf454eb922e1d3369a86ce1c0c803f7da26778cf7f4"
Mar 23 23:26:27 pause-574316 kubelet[7640]: I0323 23:26:27.773578 7640 scope.go:115] "RemoveContainer" containerID="2b7bc2ac835be2dc569bede97afe45c6357e58e4e23f23539dc1433d3a84bedc"
Mar 23 23:26:30 pause-574316 kubelet[7640]: I0323 23:26:30.818789 7640 kubelet_node_status.go:108] "Node was previously registered" node="pause-574316"
Mar 23 23:26:30 pause-574316 kubelet[7640]: I0323 23:26:30.819442 7640 kubelet_node_status.go:73] "Successfully registered node" node="pause-574316"
Mar 23 23:26:30 pause-574316 kubelet[7640]: I0323 23:26:30.821124 7640 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
Mar 23 23:26:30 pause-574316 kubelet[7640]: I0323 23:26:30.827327 7640 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
Mar 23 23:26:31 pause-574316 kubelet[7640]: I0323 23:26:31.062727 7640 apiserver.go:52] "Watching apiserver"
Mar 23 23:26:31 pause-574316 kubelet[7640]: I0323 23:26:31.069251 7640 topology_manager.go:210] "Topology Admit Handler"
Mar 23 23:26:31 pause-574316 kubelet[7640]: I0323 23:26:31.069369 7640 topology_manager.go:210] "Topology Admit Handler"
Mar 23 23:26:31 pause-574316 kubelet[7640]: I0323 23:26:31.069450 7640 topology_manager.go:210] "Topology Admit Handler"
Mar 23 23:26:31 pause-574316 kubelet[7640]: I0323 23:26:31.098738 7640 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Mar 23 23:26:31 pause-574316 kubelet[7640]: I0323 23:26:31.160848 7640 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxzp5\" (UniqueName: \"kubernetes.io/projected/aeba9090-2690-42e1-8439-a0cd55ada6d0-kube-api-access-kxzp5\") pod \"kube-proxy-lnk2t\" (UID: \"aeba9090-2690-42e1-8439-a0cd55ada6d0\") " pod="kube-system/kube-proxy-lnk2t"
Mar 23 23:26:31 pause-574316 kubelet[7640]: I0323 23:26:31.160919 7640 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wm5m\" (UniqueName: \"kubernetes.io/projected/ce593e1c-39de-4a21-994e-157f74ab568e-kube-api-access-8wm5m\") pod \"coredns-787d4945fb-lljqk\" (UID: \"ce593e1c-39de-4a21-994e-157f74ab568e\") " pod="kube-system/coredns-787d4945fb-lljqk"
Mar 23 23:26:31 pause-574316 kubelet[7640]: I0323 23:26:31.160966 7640 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aeba9090-2690-42e1-8439-a0cd55ada6d0-lib-modules\") pod \"kube-proxy-lnk2t\" (UID: \"aeba9090-2690-42e1-8439-a0cd55ada6d0\") " pod="kube-system/kube-proxy-lnk2t"
Mar 23 23:26:31 pause-574316 kubelet[7640]: I0323 23:26:31.161002 7640 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce593e1c-39de-4a21-994e-157f74ab568e-config-volume\") pod \"coredns-787d4945fb-lljqk\" (UID: \"ce593e1c-39de-4a21-994e-157f74ab568e\") " pod="kube-system/coredns-787d4945fb-lljqk"
Mar 23 23:26:31 pause-574316 kubelet[7640]: I0323 23:26:31.161027 7640 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/aeba9090-2690-42e1-8439-a0cd55ada6d0-kube-proxy\") pod \"kube-proxy-lnk2t\" (UID: \"aeba9090-2690-42e1-8439-a0cd55ada6d0\") " pod="kube-system/kube-proxy-lnk2t"
Mar 23 23:26:31 pause-574316 kubelet[7640]: I0323 23:26:31.161059 7640 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aeba9090-2690-42e1-8439-a0cd55ada6d0-xtables-lock\") pod \"kube-proxy-lnk2t\" (UID: \"aeba9090-2690-42e1-8439-a0cd55ada6d0\") " pod="kube-system/kube-proxy-lnk2t"
Mar 23 23:26:31 pause-574316 kubelet[7640]: I0323 23:26:31.161088 7640 reconciler.go:41] "Reconciler: start to sync state"
Mar 23 23:26:32 pause-574316 kubelet[7640]: I0323 23:26:32.271414 7640 scope.go:115] "RemoveContainer" containerID="7ff3dcd747a3b0f733eda143cf5993de0d0e1afd3dbd1b2b2f9a8fd3dbea2be9"
Mar 23 23:26:32 pause-574316 kubelet[7640]: I0323 23:26:32.700707 7640 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="542477f9c5e1de564352e093d277e29ea04f9ada02cdebe4924d534ea2be3623"
Mar 23 23:26:34 pause-574316 kubelet[7640]: I0323 23:26:34.734860 7640 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
Mar 23 23:26:35 pause-574316 kubelet[7640]: I0323 23:26:35.343216 7640 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=05fc3b9f-534f-4c25-ab9a-0f1ea4cb9014 path="/var/lib/kubelet/pods/05fc3b9f-534f-4c25-ab9a-0f1ea4cb9014/volumes"
Mar 23 23:26:37 pause-574316 kubelet[7640]: I0323 23:26:37.006845 7640 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-574316 -n pause-574316
E0323 23:26:53.591311 68702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16143-62012/.minikube/profiles/skaffold-701809/client.crt: no such file or directory
helpers_test.go:261: (dbg) Run: kubectl --context pause-574316 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (75.12s)