=== RUN TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run: out/minikube-linux-amd64 start -p ha-828033 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker --container-runtime=docker
E0522 17:53:05.801797 16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/addons-340431/client.crt: no such file or directory
E0522 17:53:46.762545 16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/addons-340431/client.crt: no such file or directory
E0522 17:55:08.683507 16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/addons-340431/client.crt: no such file or directory
ha_test.go:101: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p ha-828033 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker --container-runtime=docker: exit status 80 (3m36.588944394s)
-- stdout --
* [ha-828033] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=18943
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/18943-9771/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-9771/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on user configuration
* Using Docker driver with root privileges
* Starting "ha-828033" primary control-plane node in "ha-828033" cluster
* Pulling base image v0.0.44-1715707529-18887 ...
* Creating docker container (CPUs=2, Memory=2200MB) ...
* Preparing Kubernetes v1.30.1 on Docker 26.1.2 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring CNI (Container Networking Interface) ...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Starting "ha-828033-m02" control-plane node in "ha-828033" cluster
* Pulling base image v0.0.44-1715707529-18887 ...
* Creating docker container (CPUs=2, Memory=2200MB) ...
* Stopping node "ha-828033-m02" ...
* Powering off "ha-828033-m02" via SSH ...
* Deleting "ha-828033-m02" in docker ...
* Creating docker container (CPUs=2, Memory=2200MB) ...
-- /stdout --
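
[Editor's aside, not part of the captured output: ha_test.go:101 shells the minikube binary out as a subprocess, so the "exit status 80" above is an ordinary process exit code seen by the test. A minimal Go sketch of that invoke-and-check pattern, with the binary path and flags copied verbatim from the log and everything else illustrative, not the actual test helper:]

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Command line copied from the failing run logged above.
	cmd := exec.Command("out/minikube-linux-amd64",
		"start", "-p", "ha-828033", "--wait=true", "--memory=2200",
		"--ha", "-v=7", "--alsologtostderr",
		"--driver=docker", "--container-runtime=docker")
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// The failure above reaches the harness as this value: exit status 80.
		fmt.Printf("non-zero exit: %d\n%s", exitErr.ExitCode(), out)
	}
}
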
** stderr **
I0522 17:52:51.616388 67740 out.go:291] Setting OutFile to fd 1 ...
I0522 17:52:51.616660 67740 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0522 17:52:51.616670 67740 out.go:304] Setting ErrFile to fd 2...
I0522 17:52:51.616674 67740 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0522 17:52:51.616882 67740 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-9771/.minikube/bin
I0522 17:52:51.617455 67740 out.go:298] Setting JSON to false
I0522 17:52:51.618613 67740 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2116,"bootTime":1716398256,"procs":498,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0522 17:52:51.618668 67740 start.go:139] virtualization: kvm guest
I0522 17:52:51.620581 67740 out.go:177] * [ha-828033] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
I0522 17:52:51.621796 67740 out.go:177] - MINIKUBE_LOCATION=18943
I0522 17:52:51.622990 67740 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0522 17:52:51.621903 67740 notify.go:220] Checking for updates...
I0522 17:52:51.625177 67740 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/18943-9771/kubeconfig
I0522 17:52:51.626330 67740 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-9771/.minikube
I0522 17:52:51.627520 67740 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0522 17:52:51.628659 67740 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0522 17:52:51.629817 67740 driver.go:392] Setting default libvirt URI to qemu:///system
I0522 17:52:51.650607 67740 docker.go:122] docker version: linux-26.1.3:Docker Engine - Community
I0522 17:52:51.650716 67740 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0522 17:52:51.695998 67740 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:44 SystemTime:2024-05-22 17:52:51.687785691 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0522 17:52:51.696115 67740 docker.go:295] overlay module found
I0522 17:52:51.697872 67740 out.go:177] * Using the docker driver based on user configuration
I0522 17:52:51.699059 67740 start.go:297] selected driver: docker
I0522 17:52:51.699080 67740 start.go:901] validating driver "docker" against <nil>
I0522 17:52:51.699093 67740 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0522 17:52:51.699900 67740 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0522 17:52:51.745624 67740 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:44 SystemTime:2024-05-22 17:52:51.73730429 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0522 17:52:51.745821 67740 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I0522 17:52:51.746041 67740 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0522 17:52:51.747482 67740 out.go:177] * Using Docker driver with root privileges
I0522 17:52:51.748998 67740 cni.go:84] Creating CNI manager for ""
I0522 17:52:51.749011 67740 cni.go:136] multinode detected (0 nodes found), recommending kindnet
I0522 17:52:51.749020 67740 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
I0522 17:52:51.749077 67740 start.go:340] cluster config:
{Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0522 17:52:51.750256 67740 out.go:177] * Starting "ha-828033" primary control-plane node in "ha-828033" cluster
I0522 17:52:51.751326 67740 cache.go:121] Beginning downloading kic base image for docker with docker
I0522 17:52:51.752481 67740 out.go:177] * Pulling base image v0.0.44-1715707529-18887 ...
I0522 17:52:51.753555 67740 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
I0522 17:52:51.753579 67740 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
I0522 17:52:51.753585 67740 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
I0522 17:52:51.753627 67740 cache.go:56] Caching tarball of preloaded images
I0522 17:52:51.753764 67740 preload.go:173] Found /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0522 17:52:51.753779 67740 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
I0522 17:52:51.754104 67740 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
I0522 17:52:51.754126 67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json: {Name:mk536dd8f64273be31005b58553b5cd1d6e6f69c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0522 17:52:51.769095 67740 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon, skipping pull
I0522 17:52:51.769113 67740 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in daemon, skipping load
I0522 17:52:51.769128 67740 cache.go:194] Successfully downloaded all kic artifacts
I0522 17:52:51.769147 67740 start.go:360] acquireMachinesLock for ha-828033: {Name:mkbefd324494a1695e1df90bb310edf3046d6e62 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0522 17:52:51.769223 67740 start.go:364] duration metric: took 61.25µs to acquireMachinesLock for "ha-828033"
I0522 17:52:51.769243 67740 start.go:93] Provisioning new machine with config: &{Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0522 17:52:51.769302 67740 start.go:125] createHost starting for "" (driver="docker")
I0522 17:52:51.771035 67740 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
I0522 17:52:51.771256 67740 start.go:159] libmachine.API.Create for "ha-828033" (driver="docker")
I0522 17:52:51.771318 67740 client.go:168] LocalClient.Create starting
I0522 17:52:51.771394 67740 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem
I0522 17:52:51.771429 67740 main.go:141] libmachine: Decoding PEM data...
I0522 17:52:51.771446 67740 main.go:141] libmachine: Parsing certificate...
I0522 17:52:51.771502 67740 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem
I0522 17:52:51.771520 67740 main.go:141] libmachine: Decoding PEM data...
I0522 17:52:51.771528 67740 main.go:141] libmachine: Parsing certificate...
I0522 17:52:51.771801 67740 cli_runner.go:164] Run: docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0522 17:52:51.786884 67740 cli_runner.go:211] docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0522 17:52:51.786972 67740 network_create.go:281] running [docker network inspect ha-828033] to gather additional debugging logs...
I0522 17:52:51.787013 67740 cli_runner.go:164] Run: docker network inspect ha-828033
W0522 17:52:51.801352 67740 cli_runner.go:211] docker network inspect ha-828033 returned with exit code 1
I0522 17:52:51.801375 67740 network_create.go:284] error running [docker network inspect ha-828033]: docker network inspect ha-828033: exit status 1
stdout:
[]
stderr:
Error response from daemon: network ha-828033 not found
I0522 17:52:51.801394 67740 network_create.go:286] output of [docker network inspect ha-828033]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network ha-828033 not found
** /stderr **
I0522 17:52:51.801476 67740 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0522 17:52:51.817609 67740 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001996110}
I0522 17:52:51.817644 67740 network_create.go:124] attempt to create docker network ha-828033 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I0522 17:52:51.817690 67740 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-828033 ha-828033
I0522 17:52:51.866851 67740 network_create.go:108] docker network ha-828033 192.168.49.0/24 created
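
[Editor's aside, not part of the captured output: the exchange above is minikube's ensure-network dance — inspect fails with "network ha-828033 not found", so the bridge network is created with the subnet chosen at network.go:206. A hedged Go sketch of the same inspect-then-create idiom via os/exec; ensureNetwork is an illustrative name, and the labels and MTU option from the logged create command are omitted:]

package main

import (
	"fmt"
	"os/exec"
)

// ensureNetwork mirrors the sequence above: "docker network inspect" exits
// non-zero when the network is absent, and only then is the network created.
func ensureNetwork(name, subnet, gateway string) error {
	if exec.Command("docker", "network", "inspect", name).Run() == nil {
		return nil // network already exists
	}
	out, err := exec.Command("docker", "network", "create",
		"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway,
		"-o", "--ip-masq", "-o", "--icc", name).CombinedOutput()
	if err != nil {
		return fmt.Errorf("network create failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	// Values copied from the log lines above.
	if err := ensureNetwork("ha-828033", "192.168.49.0/24", "192.168.49.1"); err != nil {
		fmt.Println(err)
	}
}
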
I0522 17:52:51.866880 67740 kic.go:121] calculated static IP "192.168.49.2" for the "ha-828033" container
I0522 17:52:51.866949 67740 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0522 17:52:51.883567 67740 cli_runner.go:164] Run: docker volume create ha-828033 --label name.minikube.sigs.k8s.io=ha-828033 --label created_by.minikube.sigs.k8s.io=true
I0522 17:52:51.902679 67740 oci.go:103] Successfully created a docker volume ha-828033
I0522 17:52:51.902766 67740 cli_runner.go:164] Run: docker run --rm --name ha-828033-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-828033 --entrypoint /usr/bin/test -v ha-828033:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -d /var/lib
I0522 17:52:52.415715 67740 oci.go:107] Successfully prepared a docker volume ha-828033
I0522 17:52:52.415766 67740 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
I0522 17:52:52.415787 67740 kic.go:194] Starting extracting preloaded images to volume ...
I0522 17:52:52.415843 67740 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-828033:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir
I0522 17:52:56.549014 67740 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-828033:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir: (4.133117574s)
I0522 17:52:56.549059 67740 kic.go:203] duration metric: took 4.133268991s to extract preloaded images to volume ...
W0522 17:52:56.549215 67740 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0522 17:52:56.549336 67740 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0522 17:52:56.595962 67740 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-828033 --name ha-828033 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-828033 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-828033 --network ha-828033 --ip 192.168.49.2 --volume ha-828033:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a
I0522 17:52:56.872425 67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Running}}
I0522 17:52:56.891462 67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
I0522 17:52:56.907928 67740 cli_runner.go:164] Run: docker exec ha-828033 stat /var/lib/dpkg/alternatives/iptables
I0522 17:52:56.946756 67740 oci.go:144] the created container "ha-828033" has a running status.
I0522 17:52:56.946795 67740 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa...
I0522 17:52:57.123336 67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa.pub -> /home/docker/.ssh/authorized_keys
I0522 17:52:57.123383 67740 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0522 17:52:57.142261 67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
I0522 17:52:57.162674 67740 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0522 17:52:57.162700 67740 kic_runner.go:114] Args: [docker exec --privileged ha-828033 chown docker:docker /home/docker/.ssh/authorized_keys]
I0522 17:52:57.249568 67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
I0522 17:52:57.270001 67740 machine.go:94] provisionDockerMachine start ...
I0522 17:52:57.270092 67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
I0522 17:52:57.288870 67740 main.go:141] libmachine: Using SSH client type: native
I0522 17:52:57.289150 67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil> [] 0s} 127.0.0.1 32787 <nil> <nil>}
I0522 17:52:57.289175 67740 main.go:141] libmachine: About to run SSH command:
hostname
I0522 17:52:57.494306 67740 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033
I0522 17:52:57.494336 67740 ubuntu.go:169] provisioning hostname "ha-828033"
I0522 17:52:57.494406 67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
I0522 17:52:57.511445 67740 main.go:141] libmachine: Using SSH client type: native
I0522 17:52:57.511684 67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil> [] 0s} 127.0.0.1 32787 <nil> <nil>}
I0522 17:52:57.511709 67740 main.go:141] libmachine: About to run SSH command:
sudo hostname ha-828033 && echo "ha-828033" | sudo tee /etc/hostname
I0522 17:52:57.632360 67740 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033
I0522 17:52:57.632434 67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
I0522 17:52:57.648419 67740 main.go:141] libmachine: Using SSH client type: native
I0522 17:52:57.648608 67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil> [] 0s} 127.0.0.1 32787 <nil> <nil>}
I0522 17:52:57.648626 67740 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sha-828033' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-828033/g' /etc/hosts;
else
echo '127.0.1.1 ha-828033' | sudo tee -a /etc/hosts;
fi
fi
I0522 17:52:57.762947 67740 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0522 17:52:57.762976 67740 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
I0522 17:52:57.762997 67740 ubuntu.go:177] setting up certificates
I0522 17:52:57.763011 67740 provision.go:84] configureAuth start
I0522 17:52:57.763069 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033
I0522 17:52:57.779057 67740 provision.go:143] copyHostCerts
I0522 17:52:57.779092 67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem
I0522 17:52:57.779116 67740 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem, removing ...
I0522 17:52:57.779121 67740 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem
I0522 17:52:57.779194 67740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem (1078 bytes)
I0522 17:52:57.779293 67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem
I0522 17:52:57.779410 67740 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem, removing ...
I0522 17:52:57.779430 67740 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem
I0522 17:52:57.779491 67740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem (1123 bytes)
I0522 17:52:57.779566 67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem
I0522 17:52:57.779592 67740 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem, removing ...
I0522 17:52:57.779602 67740 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem
I0522 17:52:57.779638 67740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem (1679 bytes)
I0522 17:52:57.779711 67740 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem org=jenkins.ha-828033 san=[127.0.0.1 192.168.49.2 ha-828033 localhost minikube]
I0522 17:52:58.158531 67740 provision.go:177] copyRemoteCerts
I0522 17:52:58.158593 67740 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0522 17:52:58.158628 67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
I0522 17:52:58.174030 67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
I0522 17:52:58.259047 67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0522 17:52:58.259096 67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0522 17:52:58.279107 67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem -> /etc/docker/server.pem
I0522 17:52:58.279164 67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
I0522 17:52:58.298603 67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0522 17:52:58.298655 67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0522 17:52:58.318081 67740 provision.go:87] duration metric: took 555.057584ms to configureAuth
I0522 17:52:58.318107 67740 ubuntu.go:193] setting minikube options for container-runtime
I0522 17:52:58.318262 67740 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0522 17:52:58.318307 67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
I0522 17:52:58.334537 67740 main.go:141] libmachine: Using SSH client type: native
I0522 17:52:58.334725 67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil> [] 0s} 127.0.0.1 32787 <nil> <nil>}
I0522 17:52:58.334739 67740 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0522 17:52:58.443317 67740 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
I0522 17:52:58.443343 67740 ubuntu.go:71] root file system type: overlay
I0522 17:52:58.443474 67740 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0522 17:52:58.443540 67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
I0522 17:52:58.459128 67740 main.go:141] libmachine: Using SSH client type: native
I0522 17:52:58.459328 67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil> [] 0s} 127.0.0.1 32787 <nil> <nil>}
I0522 17:52:58.459387 67740 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0522 17:52:58.581102 67740 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0522 17:52:58.581172 67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
I0522 17:52:58.597436 67740 main.go:141] libmachine: Using SSH client type: native
I0522 17:52:58.597600 67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil> [] 0s} 127.0.0.1 32787 <nil> <nil>}
I0522 17:52:58.597616 67740 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0522 17:52:59.221776 67740 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2024-05-08 13:59:39.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2024-05-22 17:52:58.575464359 +0000
@@ -1,46 +1,49 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
-Wants=network-online.target containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
+Wants=network-online.target
Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutStartSec=0
-RestartSec=2
-Restart=always
+Restart=on-failure
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
+LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
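
[Editor's aside, not part of the captured output: the diff dump above comes from an update-only-if-changed idiom — write docker.service.new, diff it against the installed unit, and swap it in and restart docker only when they differ. A rough Go sketch composing that same one-liner, with paths copied from the logged SSH command; a local bash invocation stands in for the SSH session minikube actually uses:]

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// diff exits non-zero when the files differ, which triggers the swap,
	// daemon-reload, enable, and restart — exactly as in the log above.
	script := `sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || ` +
		`{ sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; ` +
		`sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }`
	out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
	fmt.Printf("%s(err=%v)\n", out, err)
}
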
I0522 17:52:59.221804 67740 machine.go:97] duration metric: took 1.951777752s to provisionDockerMachine
I0522 17:52:59.221825 67740 client.go:171] duration metric: took 7.450490051s to LocalClient.Create
I0522 17:52:59.221846 67740 start.go:167] duration metric: took 7.450590188s to libmachine.API.Create "ha-828033"
I0522 17:52:59.221855 67740 start.go:293] postStartSetup for "ha-828033" (driver="docker")
I0522 17:52:59.221867 67740 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0522 17:52:59.221924 67740 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0522 17:52:59.221966 67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
I0522 17:52:59.237240 67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
I0522 17:52:59.323437 67740 ssh_runner.go:195] Run: cat /etc/os-release
I0522 17:52:59.326293 67740 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0522 17:52:59.326324 67740 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0522 17:52:59.326337 67740 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0522 17:52:59.326349 67740 info.go:137] Remote host: Ubuntu 22.04.4 LTS
I0522 17:52:59.326360 67740 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-9771/.minikube/addons for local assets ...
I0522 17:52:59.326404 67740 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-9771/.minikube/files for local assets ...
I0522 17:52:59.326472 67740 filesync.go:149] local asset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> 166682.pem in /etc/ssl/certs
I0522 17:52:59.326481 67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> /etc/ssl/certs/166682.pem
I0522 17:52:59.326562 67740 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0522 17:52:59.333825 67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem --> /etc/ssl/certs/166682.pem (1708 bytes)
I0522 17:52:59.354042 67740 start.go:296] duration metric: took 132.174455ms for postStartSetup
I0522 17:52:59.354355 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033
I0522 17:52:59.369659 67740 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
I0522 17:52:59.369914 67740 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0522 17:52:59.369957 67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
I0522 17:52:59.385473 67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
I0522 17:52:59.467652 67740 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0522 17:52:59.471509 67740 start.go:128] duration metric: took 7.702195096s to createHost
I0522 17:52:59.471529 67740 start.go:83] releasing machines lock for "ha-828033", held for 7.702295867s
I0522 17:52:59.471577 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033
I0522 17:52:59.487082 67740 ssh_runner.go:195] Run: cat /version.json
I0522 17:52:59.487134 67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
I0522 17:52:59.487143 67740 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0522 17:52:59.487207 67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
I0522 17:52:59.502998 67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
I0522 17:52:59.504153 67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
I0522 17:52:59.582552 67740 ssh_runner.go:195] Run: systemctl --version
I0522 17:52:59.586415 67740 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0522 17:52:59.653911 67740 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0522 17:52:59.675707 67740 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0522 17:52:59.675785 67740 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0522 17:52:59.699419 67740 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
I0522 17:52:59.699447 67740 start.go:494] detecting cgroup driver to use...
I0522 17:52:59.699483 67740 detect.go:196] detected "cgroupfs" cgroup driver on host os
I0522 17:52:59.699592 67740 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0522 17:52:59.713359 67740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0522 17:52:59.721747 67740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0522 17:52:59.729895 67740 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0522 17:52:59.729949 67740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0522 17:52:59.738288 67740 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0522 17:52:59.746561 67740 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0522 17:52:59.754810 67740 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0522 17:52:59.762993 67740 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0522 17:52:59.770726 67740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0522 17:52:59.778920 67740 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0522 17:52:59.787052 67740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0522 17:52:59.795263 67740 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0522 17:52:59.802296 67740 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0522 17:52:59.809582 67740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0522 17:52:59.883276 67740 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0522 17:52:59.963129 67740 start.go:494] detecting cgroup driver to use...
I0522 17:52:59.963176 67740 detect.go:196] detected "cgroupfs" cgroup driver on host os
I0522 17:52:59.963243 67740 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0522 17:52:59.974498 67740 cruntime.go:279] skipping containerd shutdown because we are bound to it
I0522 17:52:59.974562 67740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0522 17:52:59.984764 67740 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0522 17:53:00.000654 67740 ssh_runner.go:195] Run: which cri-dockerd
I0522 17:53:00.003744 67740 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0522 17:53:00.011737 67740 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0522 17:53:00.029748 67740 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0522 17:53:00.143798 67740 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0522 17:53:00.227819 67740 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0522 17:53:00.227952 67740 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0522 17:53:00.243383 67740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0522 17:53:00.315723 67740 ssh_runner.go:195] Run: sudo systemctl restart docker
I0522 17:53:00.537231 67740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
I0522 17:53:00.547492 67740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0522 17:53:00.557301 67740 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0522 17:53:00.636990 67740 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0522 17:53:00.707384 67740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0522 17:53:00.778889 67740 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0522 17:53:00.790448 67740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0522 17:53:00.799716 67740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0522 17:53:00.871781 67740 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
I0522 17:53:00.927578 67740 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0522 17:53:00.927643 67740 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0522 17:53:00.930933 67740 start.go:562] Will wait 60s for crictl version
I0522 17:53:00.930992 67740 ssh_runner.go:195] Run: which crictl
I0522 17:53:00.934009 67740 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0522 17:53:00.964626 67740 start.go:578] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 26.1.2
RuntimeApiVersion: v1
I0522 17:53:00.964671 67740 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0522 17:53:00.985746 67740 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0522 17:53:01.008319 67740 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.2 ...
I0522 17:53:01.008394 67740 cli_runner.go:164] Run: docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0522 17:53:01.024322 67740 ssh_runner.go:195] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I0522 17:53:01.027742 67740 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0522 17:53:01.037471 67740 kubeadm.go:877] updating cluster {Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0522 17:53:01.037581 67740 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
I0522 17:53:01.037636 67740 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0522 17:53:01.054459 67740 docker.go:685] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.30.1
registry.k8s.io/kube-controller-manager:v1.30.1
registry.k8s.io/kube-scheduler:v1.30.1
registry.k8s.io/kube-proxy:v1.30.1
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/coredns/coredns:v1.11.1
registry.k8s.io/pause:3.9
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0522 17:53:01.054484 67740 docker.go:615] Images already preloaded, skipping extraction
I0522 17:53:01.054533 67740 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0522 17:53:01.071182 67740 docker.go:685] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.30.1
registry.k8s.io/kube-scheduler:v1.30.1
registry.k8s.io/kube-controller-manager:v1.30.1
registry.k8s.io/kube-proxy:v1.30.1
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/coredns/coredns:v1.11.1
registry.k8s.io/pause:3.9
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0522 17:53:01.071199 67740 cache_images.go:84] Images are preloaded, skipping loading
I0522 17:53:01.071214 67740 kubeadm.go:928] updating node { 192.168.49.2 8443 v1.30.1 docker true true} ...
I0522 17:53:01.071337 67740 kubeadm.go:940] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-828033 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0522 17:53:01.071392 67740 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0522 17:53:01.113042 67740 cni.go:84] Creating CNI manager for ""
I0522 17:53:01.113070 67740 cni.go:136] multinode detected (1 nodes found), recommending kindnet
I0522 17:53:01.113090 67740 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0522 17:53:01.113121 67740 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-828033 NodeName:ha-828033 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0522 17:53:01.113296 67740 kubeadm.go:187] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.49.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/cri-dockerd.sock
name: "ha-828033"
kubeletExtraArgs:
node-ip: 192.168.49.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.30.1
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
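Aside: the document above is a standard kubeadm v1beta3 config set (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). To see how far it deviates from upstream defaults, kubeadm can print its own baseline for comparison (a sketch; run wherever the kubeadm binary is available, e.g. inside the node):

    kubeadm config print init-defaults
    kubeadm config print init-defaults --component-configs KubeletConfiguration,KubeProxyConfiguration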
I0522 17:53:01.113320 67740 kube-vip.go:115] generating kube-vip config ...
I0522 17:53:01.113376 67740 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
I0522 17:53:01.123923 67740 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
stdout:
stderr:
I0522 17:53:01.124031 67740 kube-vip.go:137] kube-vip config:
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
name: kube-vip
namespace: kube-system
spec:
containers:
- args:
- manager
env:
- name: vip_arp
value: "true"
- name: port
value: "8443"
- name: vip_nodename
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: vip_interface
value: eth0
- name: vip_cidr
value: "32"
- name: dns_mode
value: first
- name: cp_enable
value: "true"
- name: cp_namespace
value: kube-system
- name: vip_leaderelection
value: "true"
- name: vip_leasename
value: plndr-cp-lock
- name: vip_leaseduration
value: "5"
- name: vip_renewdeadline
value: "3"
- name: vip_retryperiod
value: "1"
- name: address
value: 192.168.49.254
- name: prometheus_server
value: :2112
image: ghcr.io/kube-vip/kube-vip:v0.8.0
imagePullPolicy: IfNotPresent
name: kube-vip
resources: {}
securityContext:
capabilities:
add:
- NET_ADMIN
- NET_RAW
volumeMounts:
- mountPath: /etc/kubernetes/admin.conf
name: kubeconfig
hostAliases:
- hostnames:
- kubernetes
ip: 127.0.0.1
hostNetwork: true
volumes:
- hostPath:
path: "/etc/kubernetes/super-admin.conf"
name: kubeconfig
status: {}
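Aside: kube-vip runs as a static pod from /etc/kubernetes/manifests; via leader election (lease plndr-cp-lock in kube-system), the winning control-plane node answers ARP for the VIP 192.168.49.254 on eth0. Once the cluster is up, the binding can be checked from the node; a sketch assuming this profile:

    # on the elected leader, the VIP appears as an extra address on eth0
    minikube -p ha-828033 ssh -- ip addr show eth0 | grep 192.168.49.254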
I0522 17:53:01.124082 67740 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
I0522 17:53:01.131476 67740 binaries.go:44] Found k8s binaries, skipping transfer
I0522 17:53:01.131533 67740 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
I0522 17:53:01.138724 67740 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
I0522 17:53:01.153627 67740 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0522 17:53:01.168501 67740 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
I0522 17:53:01.183138 67740 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
I0522 17:53:01.197801 67740 ssh_runner.go:195] Run: grep 192.168.49.254 control-plane.minikube.internal$ /etc/hosts
I0522 17:53:01.200669 67740 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0522 17:53:01.209778 67740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0522 17:53:01.280341 67740 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0522 17:53:01.292055 67740 certs.go:68] Setting up /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033 for IP: 192.168.49.2
I0522 17:53:01.292076 67740 certs.go:194] generating shared ca certs ...
I0522 17:53:01.292094 67740 certs.go:226] acquiring lock for ca certs: {Name:mkb1ad99e7529ca8084f0e374cc6ddf767aece14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0522 17:53:01.292206 67740 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key
I0522 17:53:01.292254 67740 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key
I0522 17:53:01.292264 67740 certs.go:256] generating profile certs ...
I0522 17:53:01.292307 67740 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.key
I0522 17:53:01.292319 67740 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.crt with IP's: []
I0522 17:53:01.356953 67740 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.crt ...
I0522 17:53:01.356984 67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.crt: {Name:mk107af694da048ec96fb863990f78dd2f1cfdbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0522 17:53:01.357149 67740 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.key ...
I0522 17:53:01.357160 67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.key: {Name:mkf1e13d4f9700868add4d6cce143b650167d122 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0522 17:53:01.357241 67740 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.fe94fa36
I0522 17:53:01.357257 67740 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.fe94fa36 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
I0522 17:53:01.556313 67740 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.fe94fa36 ...
I0522 17:53:01.556340 67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.fe94fa36: {Name:mkab4c373a3fffab576a8ea1d67e55afa225eeb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0522 17:53:01.556500 67740 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.fe94fa36 ...
I0522 17:53:01.556513 67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.fe94fa36: {Name:mk7e54effde1d4509e26cfa435b194571ee47bb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0522 17:53:01.556580 67740 certs.go:381] copying /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.fe94fa36 -> /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt
I0522 17:53:01.556650 67740 certs.go:385] copying /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.fe94fa36 -> /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key
I0522 17:53:01.556697 67740 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key
I0522 17:53:01.556711 67740 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt with IP's: []
I0522 17:53:01.630998 67740 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt ...
I0522 17:53:01.631021 67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt: {Name:mkeef06bb61e0ccc36361cc465c59f21e7bdea1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0522 17:53:01.631157 67740 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key ...
I0522 17:53:01.631168 67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key: {Name:mkb9ab74b377711217a8c6b152f36c9fda7264a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0522 17:53:01.631230 67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0522 17:53:01.631246 67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0522 17:53:01.631260 67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0522 17:53:01.631309 67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0522 17:53:01.631328 67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0522 17:53:01.631343 67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0522 17:53:01.631356 67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0522 17:53:01.631365 67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0522 17:53:01.631417 67740 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem (1338 bytes)
W0522 17:53:01.631447 67740 certs.go:480] ignoring /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668_empty.pem, impossibly tiny 0 bytes
I0522 17:53:01.631457 67740 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem (1679 bytes)
I0522 17:53:01.631479 67740 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem (1078 bytes)
I0522 17:53:01.631502 67740 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem (1123 bytes)
I0522 17:53:01.631523 67740 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem (1679 bytes)
I0522 17:53:01.631558 67740 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem (1708 bytes)
I0522 17:53:01.631582 67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0522 17:53:01.631597 67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem -> /usr/share/ca-certificates/16668.pem
I0522 17:53:01.631608 67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> /usr/share/ca-certificates/166682.pem
I0522 17:53:01.632128 67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0522 17:53:01.652751 67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0522 17:53:01.672560 67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0522 17:53:01.691795 67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0522 17:53:01.711301 67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
I0522 17:53:01.731063 67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0522 17:53:01.751064 67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0522 17:53:01.770695 67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0522 17:53:01.790410 67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0522 17:53:01.814053 67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem --> /usr/share/ca-certificates/16668.pem (1338 bytes)
I0522 17:53:01.833703 67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem --> /usr/share/ca-certificates/166682.pem (1708 bytes)
I0522 17:53:01.853223 67740 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0522 17:53:01.868213 67740 ssh_runner.go:195] Run: openssl version
I0522 17:53:01.872673 67740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166682.pem && ln -fs /usr/share/ca-certificates/166682.pem /etc/ssl/certs/166682.pem"
I0522 17:53:01.880830 67740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166682.pem
I0522 17:53:01.883744 67740 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 22 17:49 /usr/share/ca-certificates/166682.pem
I0522 17:53:01.883792 67740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166682.pem
I0522 17:53:01.889587 67740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166682.pem /etc/ssl/certs/3ec20f2e.0"
I0522 17:53:01.897227 67740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0522 17:53:01.904819 67740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0522 17:53:01.907709 67740 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 22 17:45 /usr/share/ca-certificates/minikubeCA.pem
I0522 17:53:01.907753 67740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0522 17:53:01.913481 67740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0522 17:53:01.921278 67740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16668.pem && ln -fs /usr/share/ca-certificates/16668.pem /etc/ssl/certs/16668.pem"
I0522 17:53:01.929363 67740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16668.pem
I0522 17:53:01.932295 67740 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 22 17:49 /usr/share/ca-certificates/16668.pem
I0522 17:53:01.932352 67740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16668.pem
I0522 17:53:01.938436 67740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16668.pem /etc/ssl/certs/51391683.0"
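Aside: the test -L / ln -fs pairs above implement OpenSSL's hashed-directory lookup convention: a certificate in /etc/ssl/certs is found via a symlink named after its subject hash plus a ".0" suffix, which is why each install step first runs openssl x509 -hash. The same step by hand, using paths from this run:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/16668.pem)
    sudo ln -fs /etc/ssl/certs/16668.pem "/etc/ssl/certs/${h}.0"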
I0522 17:53:01.946360 67740 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0522 17:53:01.949115 67740 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0522 17:53:01.949164 67740 kubeadm.go:391] StartCluster: {Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0522 17:53:01.949252 67740 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0522 17:53:01.965541 67740 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0522 17:53:01.973093 67740 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0522 17:53:01.980229 67740 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver
I0522 17:53:01.980270 67740 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0522 17:53:01.987751 67740 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0522 17:53:01.987768 67740 kubeadm.go:156] found existing configuration files:
I0522 17:53:01.987805 67740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0522 17:53:01.994901 67740 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0522 17:53:01.994936 67740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0522 17:53:02.001636 67740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0522 17:53:02.008534 67740 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0522 17:53:02.008575 67740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0522 17:53:02.015362 67740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0522 17:53:02.022382 67740 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0522 17:53:02.022417 67740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0522 17:53:02.029147 67740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0522 17:53:02.036313 67740 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0522 17:53:02.036352 67740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0522 17:53:02.043146 67740 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0522 17:53:02.083648 67740 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
I0522 17:53:02.083709 67740 kubeadm.go:309] [preflight] Running pre-flight checks
I0522 17:53:02.119636 67740 kubeadm.go:309] [preflight] The system verification failed. Printing the output from the verification:
I0522 17:53:02.119808 67740 kubeadm.go:309] KERNEL_VERSION: 5.15.0-1060-gcp
I0522 17:53:02.119876 67740 kubeadm.go:309] OS: Linux
I0522 17:53:02.119973 67740 kubeadm.go:309] CGROUPS_CPU: enabled
I0522 17:53:02.120054 67740 kubeadm.go:309] CGROUPS_CPUACCT: enabled
I0522 17:53:02.120145 67740 kubeadm.go:309] CGROUPS_CPUSET: enabled
I0522 17:53:02.120222 67740 kubeadm.go:309] CGROUPS_DEVICES: enabled
I0522 17:53:02.120314 67740 kubeadm.go:309] CGROUPS_FREEZER: enabled
I0522 17:53:02.120387 67740 kubeadm.go:309] CGROUPS_MEMORY: enabled
I0522 17:53:02.120444 67740 kubeadm.go:309] CGROUPS_PIDS: enabled
I0522 17:53:02.120498 67740 kubeadm.go:309] CGROUPS_HUGETLB: enabled
I0522 17:53:02.120559 67740 kubeadm.go:309] CGROUPS_BLKIO: enabled
I0522 17:53:02.176871 67740 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
I0522 17:53:02.177025 67740 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0522 17:53:02.177141 67740 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0522 17:53:02.372325 67740 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0522 17:53:02.375701 67740 out.go:204] - Generating certificates and keys ...
I0522 17:53:02.375812 67740 kubeadm.go:309] [certs] Using existing ca certificate authority
I0522 17:53:02.375935 67740 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
I0522 17:53:02.532924 67740 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
I0522 17:53:02.638523 67740 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
I0522 17:53:02.792671 67740 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
I0522 17:53:02.965135 67740 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
I0522 17:53:03.124232 67740 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
I0522 17:53:03.124354 67740 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-828033 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0522 17:53:03.226994 67740 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
I0522 17:53:03.227194 67740 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-828033 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0522 17:53:03.284062 67740 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
I0522 17:53:03.587406 67740 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
I0522 17:53:03.694896 67740 kubeadm.go:309] [certs] Generating "sa" key and public key
I0522 17:53:03.695247 67740 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0522 17:53:03.870895 67740 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
I0522 17:53:04.007853 67740 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0522 17:53:04.078725 67740 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0522 17:53:04.260744 67740 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0522 17:53:04.365893 67740 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0522 17:53:04.366333 67740 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0522 17:53:04.368648 67740 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0522 17:53:04.370859 67740 out.go:204] - Booting up control plane ...
I0522 17:53:04.370979 67740 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0522 17:53:04.371088 67740 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0522 17:53:04.371171 67740 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0522 17:53:04.383092 67740 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0522 17:53:04.384599 67740 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0522 17:53:04.384838 67740 kubeadm.go:309] [kubelet-start] Starting the kubelet
I0522 17:53:04.466492 67740 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0522 17:53:04.466604 67740 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
I0522 17:53:05.468427 67740 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002114893s
I0522 17:53:05.468551 67740 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I0522 17:53:11.141380 67740 kubeadm.go:309] [api-check] The API server is healthy after 5.672901996s
I0522 17:53:11.152116 67740 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0522 17:53:11.161056 67740 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0522 17:53:11.678578 67740 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
I0522 17:53:11.678814 67740 kubeadm.go:309] [mark-control-plane] Marking the node ha-828033 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0522 17:53:11.685295 67740 kubeadm.go:309] [bootstrap-token] Using token: 5urei6.f9k1l0b1jzskzaeu
I0522 17:53:11.686669 67740 out.go:204] - Configuring RBAC rules ...
I0522 17:53:11.686814 67740 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0522 17:53:11.689832 67740 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0522 17:53:11.694718 67740 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0522 17:53:11.699847 67740 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0522 17:53:11.702108 67740 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0522 17:53:11.704239 67740 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0522 17:53:11.712550 67740 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0522 17:53:11.974533 67740 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
I0522 17:53:12.547008 67740 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
I0522 17:53:12.548083 67740 kubeadm.go:309]
I0522 17:53:12.548149 67740 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
I0522 17:53:12.548156 67740 kubeadm.go:309]
I0522 17:53:12.548253 67740 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
I0522 17:53:12.548267 67740 kubeadm.go:309]
I0522 17:53:12.548307 67740 kubeadm.go:309] mkdir -p $HOME/.kube
I0522 17:53:12.548384 67740 kubeadm.go:309] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0522 17:53:12.548466 67740 kubeadm.go:309] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0522 17:53:12.548477 67740 kubeadm.go:309]
I0522 17:53:12.548545 67740 kubeadm.go:309] Alternatively, if you are the root user, you can run:
I0522 17:53:12.548559 67740 kubeadm.go:309]
I0522 17:53:12.548601 67740 kubeadm.go:309] export KUBECONFIG=/etc/kubernetes/admin.conf
I0522 17:53:12.548609 67740 kubeadm.go:309]
I0522 17:53:12.548648 67740 kubeadm.go:309] You should now deploy a pod network to the cluster.
I0522 17:53:12.548713 67740 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0522 17:53:12.548778 67740 kubeadm.go:309] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0522 17:53:12.548785 67740 kubeadm.go:309]
I0522 17:53:12.548889 67740 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
I0522 17:53:12.548992 67740 kubeadm.go:309] and service account keys on each node and then running the following as root:
I0522 17:53:12.549009 67740 kubeadm.go:309]
I0522 17:53:12.549123 67740 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 5urei6.f9k1l0b1jzskzaeu \
I0522 17:53:12.549259 67740 kubeadm.go:309] --discovery-token-ca-cert-hash sha256:570edbfe7b2478083ade58e1456273e7a23d9332ea9b48503bd1fbf2d9614c7e \
I0522 17:53:12.549291 67740 kubeadm.go:309] --control-plane
I0522 17:53:12.549300 67740 kubeadm.go:309]
I0522 17:53:12.549413 67740 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
I0522 17:53:12.549427 67740 kubeadm.go:309]
I0522 17:53:12.549530 67740 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 5urei6.f9k1l0b1jzskzaeu \
I0522 17:53:12.549654 67740 kubeadm.go:309] --discovery-token-ca-cert-hash sha256:570edbfe7b2478083ade58e1456273e7a23d9332ea9b48503bd1fbf2d9614c7e
I0522 17:53:12.551710 67740 kubeadm.go:309] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1060-gcp\n", err: exit status 1
I0522 17:53:12.551839 67740 kubeadm.go:309] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
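Aside: the --discovery-token-ca-cert-hash in the join commands above is a SHA-256 of the cluster CA's public key. If the printed command is lost, kubeadm's documented openssl pipeline re-derives it; note that minikube keeps its CA under /var/lib/minikube/certs rather than the usual /etc/kubernetes/pki:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'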
I0522 17:53:12.551867 67740 cni.go:84] Creating CNI manager for ""
I0522 17:53:12.551876 67740 cni.go:136] multinode detected (1 nodes found), recommending kindnet
I0522 17:53:12.553609 67740 out.go:177] * Configuring CNI (Container Networking Interface) ...
I0522 17:53:12.554924 67740 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I0522 17:53:12.558498 67740 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
I0522 17:53:12.558516 67740 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
I0522 17:53:12.574461 67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I0522 17:53:12.755502 67740 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0522 17:53:12.755579 67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0522 17:53:12.755600 67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-828033 minikube.k8s.io/updated_at=2024_05_22T17_53_12_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=461168c3991b3796899fb93cd381299efb7493c9 minikube.k8s.io/name=ha-828033 minikube.k8s.io/primary=true
I0522 17:53:12.850109 67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0522 17:53:12.855591 67740 ops.go:34] apiserver oom_adj: -16
I0522 17:53:13.350585 67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0522 17:53:13.850559 67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0522 17:53:14.350332 67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0522 17:53:14.850482 67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0522 17:53:15.350200 67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0522 17:53:15.850568 67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0522 17:53:16.350359 67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0522 17:53:16.850559 67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0522 17:53:17.350665 67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0522 17:53:17.850775 67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0522 17:53:18.351191 67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0522 17:53:18.850358 67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0522 17:53:19.351122 67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0522 17:53:19.850171 67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0522 17:53:20.350366 67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0522 17:53:20.851051 67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0522 17:53:21.350960 67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0522 17:53:21.851014 67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0522 17:53:22.350781 67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0522 17:53:22.850795 67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0522 17:53:23.350314 67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0522 17:53:23.851155 67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0522 17:53:24.351209 67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0522 17:53:24.850179 67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0522 17:53:24.912848 67740 kubeadm.go:1107] duration metric: took 12.157331343s to wait for elevateKubeSystemPrivileges
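Aside: the burst of identical "kubectl get sa default" runs above is minikube polling, at roughly 500ms intervals, until the default ServiceAccount exists in the default namespace, the usual signal that kube-controller-manager's service-account controllers are ready. The equivalent shell idiom:

    until kubectl get sa default >/dev/null 2>&1; do sleep 0.5; done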
W0522 17:53:24.912892 67740 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
I0522 17:53:24.912903 67740 kubeadm.go:393] duration metric: took 22.9637422s to StartCluster
I0522 17:53:24.912925 67740 settings.go:142] acquiring lock: {Name:mk1bbb63a81703f4c38b97b29a878017a54a2114 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0522 17:53:24.912998 67740 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/18943-9771/kubeconfig
I0522 17:53:24.913898 67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/kubeconfig: {Name:mkb679b50174ade9b53ad7d806acd171ac61db6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0522 17:53:24.914152 67740 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0522 17:53:24.914177 67740 start.go:240] waiting for startup goroutines ...
I0522 17:53:24.914209 67740 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0522 17:53:24.914186 67740 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
I0522 17:53:24.914247 67740 addons.go:69] Setting storage-provisioner=true in profile "ha-828033"
I0522 17:53:24.914265 67740 addons.go:69] Setting default-storageclass=true in profile "ha-828033"
I0522 17:53:24.914280 67740 addons.go:234] Setting addon storage-provisioner=true in "ha-828033"
I0522 17:53:24.914303 67740 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-828033"
I0522 17:53:24.914307 67740 host.go:66] Checking if "ha-828033" exists ...
I0522 17:53:24.914407 67740 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0522 17:53:24.914687 67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
I0522 17:53:24.914856 67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
I0522 17:53:24.936661 67740 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0522 17:53:24.935358 67740 loader.go:395] Config loaded from file: /home/jenkins/minikube-integration/18943-9771/kubeconfig
I0522 17:53:24.938027 67740 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0522 17:53:24.938051 67740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0522 17:53:24.938104 67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
I0522 17:53:24.938117 67740 kapi.go:59] client config for ha-828033: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.crt", KeyFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.key", CAFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0522 17:53:24.938535 67740 cert_rotation.go:137] Starting client certificate rotation controller
I0522 17:53:24.938693 67740 addons.go:234] Setting addon default-storageclass=true in "ha-828033"
I0522 17:53:24.938728 67740 host.go:66] Checking if "ha-828033" exists ...
I0522 17:53:24.939066 67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
I0522 17:53:24.955478 67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
I0522 17:53:24.964156 67740 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
I0522 17:53:24.964174 67740 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0522 17:53:24.964216 67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
I0522 17:53:24.983375 67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
I0522 17:53:24.987665 67740 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.49.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0522 17:53:25.061038 67740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0522 17:53:25.083441 67740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0522 17:53:25.371936 67740 start.go:946] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
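Aside: the ConfigMap replace at 17:53:24.914209 splices a hosts block into the CoreDNS Corefile ahead of the forward plugin, so in-cluster lookups of host.minikube.internal resolve to the network gateway. The inserted stanza, as built by the sed expression above:

    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }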
I0522 17:53:25.697836 67740 round_trippers.go:463] GET https://192.168.49.254:8443/apis/storage.k8s.io/v1/storageclasses
I0522 17:53:25.697859 67740 round_trippers.go:469] Request Headers:
I0522 17:53:25.697869 67740 round_trippers.go:473] Accept: application/json, */*
I0522 17:53:25.697875 67740 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0522 17:53:25.750106 67740 round_trippers.go:574] Response Status: 200 OK in 52 milliseconds
I0522 17:53:25.750738 67740 round_trippers.go:463] PUT https://192.168.49.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
I0522 17:53:25.750766 67740 round_trippers.go:469] Request Headers:
I0522 17:53:25.750775 67740 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0522 17:53:25.750779 67740 round_trippers.go:473] Accept: application/json, */*
I0522 17:53:25.750781 67740 round_trippers.go:473] Content-Type: application/json
I0522 17:53:25.753047 67740 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
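Aside: the GET/PUT round-trips above are the default-storageclass addon reading and then updating the "standard" StorageClass through the HA VIP endpoint (192.168.49.254:8443). The same requests can be reproduced with kubectl (a sketch):

    kubectl get --raw /apis/storage.k8s.io/v1/storageclasses
    kubectl get storageclass standard -o yaml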
I0522 17:53:25.754766 67740 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
I0522 17:53:25.755957 67740 addons.go:505] duration metric: took 841.76495ms for enable addons: enabled=[storage-provisioner default-storageclass]
I0522 17:53:25.755999 67740 start.go:245] waiting for cluster config update ...
I0522 17:53:25.756022 67740 start.go:254] writing updated cluster config ...
I0522 17:53:25.757404 67740 out.go:177]
I0522 17:53:25.758849 67740 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0522 17:53:25.758935 67740 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
I0522 17:53:25.760603 67740 out.go:177] * Starting "ha-828033-m02" control-plane node in "ha-828033" cluster
I0522 17:53:25.761714 67740 cache.go:121] Beginning downloading kic base image for docker with docker
I0522 17:53:25.762872 67740 out.go:177] * Pulling base image v0.0.44-1715707529-18887 ...
I0522 17:53:25.764352 67740 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
I0522 17:53:25.764396 67740 cache.go:56] Caching tarball of preloaded images
I0522 17:53:25.764446 67740 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
I0522 17:53:25.764489 67740 preload.go:173] Found /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0522 17:53:25.764505 67740 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
I0522 17:53:25.764593 67740 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
I0522 17:53:25.782684 67740 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon, skipping pull
I0522 17:53:25.782710 67740 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in daemon, skipping load
I0522 17:53:25.782728 67740 cache.go:194] Successfully downloaded all kic artifacts
I0522 17:53:25.782765 67740 start.go:360] acquireMachinesLock for ha-828033-m02: {Name:mk5f7d59268c8438badf0506a539b57f0dca67dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0522 17:53:25.782880 67740 start.go:364] duration metric: took 83.137µs to acquireMachinesLock for "ha-828033-m02"
I0522 17:53:25.782911 67740 start.go:93] Provisioning new machine with config: &{Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0522 17:53:25.783001 67740 start.go:125] createHost starting for "m02" (driver="docker")
I0522 17:53:25.784711 67740 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
I0522 17:53:25.784832 67740 start.go:159] libmachine.API.Create for "ha-828033" (driver="docker")
I0522 17:53:25.784852 67740 client.go:168] LocalClient.Create starting
I0522 17:53:25.784917 67740 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem
I0522 17:53:25.784953 67740 main.go:141] libmachine: Decoding PEM data...
I0522 17:53:25.784985 67740 main.go:141] libmachine: Parsing certificate...
I0522 17:53:25.785059 67740 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem
I0522 17:53:25.785087 67740 main.go:141] libmachine: Decoding PEM data...
I0522 17:53:25.785100 67740 main.go:141] libmachine: Parsing certificate...
I0522 17:53:25.785951 67740 cli_runner.go:164] Run: docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0522 17:53:25.804785 67740 network_create.go:77] Found existing network {name:ha-828033 subnet:0xc00191cf30 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
I0522 17:53:25.804835 67740 kic.go:121] calculated static IP "192.168.49.3" for the "ha-828033-m02" container
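[note] kic.go:121 derives "192.168.49.3" deterministically from the existing ha-828033 subnet. A sketch of that derivation, assuming node N is assigned base+N+1 within the /24 (consistent with gateway 192.168.49.1 and primary node 192.168.49.2 above; an illustration, not minikube's exact code):

package main

import (
	"fmt"
	"net"
)

// nthIP offsets the subnet base address by n in the last octet.
func nthIP(base net.IP, n byte) net.IP {
	out := make(net.IP, 4)
	copy(out, base.To4())
	out[3] += n
	return out
}

func main() {
	base := net.ParseIP("192.168.49.0")
	fmt.Println(nthIP(base, 2)) // primary control plane: 192.168.49.2
	fmt.Println(nthIP(base, 3)) // m02:                   192.168.49.3
}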
I0522 17:53:25.804904 67740 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0522 17:53:25.823769 67740 cli_runner.go:164] Run: docker volume create ha-828033-m02 --label name.minikube.sigs.k8s.io=ha-828033-m02 --label created_by.minikube.sigs.k8s.io=true
I0522 17:53:25.840603 67740 oci.go:103] Successfully created a docker volume ha-828033-m02
I0522 17:53:25.840678 67740 cli_runner.go:164] Run: docker run --rm --name ha-828033-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-828033-m02 --entrypoint /usr/bin/test -v ha-828033-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -d /var/lib
I0522 17:53:26.430644 67740 oci.go:107] Successfully prepared a docker volume ha-828033-m02
I0522 17:53:26.430675 67740 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
I0522 17:53:26.430699 67740 kic.go:194] Starting extracting preloaded images to volume ...
I0522 17:53:26.430758 67740 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-828033-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir
I0522 17:53:30.969362 67740 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-828033-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir: (4.538563584s)
I0522 17:53:30.969399 67740 kic.go:203] duration metric: took 4.538697459s to extract preloaded images to volume ...
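[note] The 4.5s step above shells out to docker to untar the lz4 preload into the node volume. A hedged Go sketch of such a wrapper (arguments copied from the log's own command; requires docker on PATH and the volume to exist):

package main

import (
	"os"
	"os/exec"
)

func main() {
	// Delegate extraction to tar inside a throwaway container, exactly as the
	// cli_runner line above does.
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", "/home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro",
		"-v", "ha-828033-m02:/extractDir",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887",
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}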
W0522 17:53:30.969534 67740 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0522 17:53:30.969649 67740 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0522 17:53:31.025232 67740 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-828033-m02 --name ha-828033-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-828033-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-828033-m02 --network ha-828033 --ip 192.168.49.3 --volume ha-828033-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a
I0522 17:53:31.438620 67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Running}}
I0522 17:53:31.457423 67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
I0522 17:53:31.475562 67740 cli_runner.go:164] Run: docker exec ha-828033-m02 stat /var/lib/dpkg/alternatives/iptables
I0522 17:53:31.519384 67740 oci.go:144] the created container "ha-828033-m02" has a running status.
I0522 17:53:31.519414 67740 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa...
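[note] Key creation at kic.go:225 amounts to generating an RSA pair and writing the private half with 0600 permissions. A stdlib-only sketch (producing the authorized_keys public line copied in the next step would additionally need golang.org/x/crypto/ssh):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"os"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// SSH clients refuse group/world-readable private keys, hence 0600.
	f, err := os.OpenFile("id_rsa", os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0o600)
	if err != nil {
		panic(err)
	}
	defer f.Close()
	pem.Encode(f, &pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	})
}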
I0522 17:53:31.724062 67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
I0522 17:53:31.724104 67740 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0522 17:53:31.751442 67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
I0522 17:53:31.776640 67740 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0522 17:53:31.776667 67740 kic_runner.go:114] Args: [docker exec --privileged ha-828033-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
I0522 17:53:31.862090 67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
I0522 17:53:31.891639 67740 machine.go:94] provisionDockerMachine start ...
I0522 17:53:31.891731 67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
I0522 17:53:31.917156 67740 main.go:141] libmachine: Using SSH client type: native
I0522 17:53:31.917467 67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil> [] 0s} 127.0.0.1 32792 <nil> <nil>}
I0522 17:53:31.917492 67740 main.go:141] libmachine: About to run SSH command:
hostname
I0522 17:53:32.120712 67740 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033-m02
I0522 17:53:32.120737 67740 ubuntu.go:169] provisioning hostname "ha-828033-m02"
I0522 17:53:32.120785 67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
I0522 17:53:32.137375 67740 main.go:141] libmachine: Using SSH client type: native
I0522 17:53:32.137553 67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil> [] 0s} 127.0.0.1 32792 <nil> <nil>}
I0522 17:53:32.137567 67740 main.go:141] libmachine: About to run SSH command:
sudo hostname ha-828033-m02 && echo "ha-828033-m02" | sudo tee /etc/hostname
I0522 17:53:32.276420 67740 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033-m02
I0522 17:53:32.276522 67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
I0522 17:53:32.298553 67740 main.go:141] libmachine: Using SSH client type: native
I0522 17:53:32.298714 67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil> [] 0s} 127.0.0.1 32792 <nil> <nil>}
I0522 17:53:32.298729 67740 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sha-828033-m02' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-828033-m02/g' /etc/hosts;
else
echo '127.0.1.1 ha-828033-m02' | sudo tee -a /etc/hosts;
fi
fi
I0522 17:53:32.411237 67740 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0522 17:53:32.411298 67740 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
I0522 17:53:32.411322 67740 ubuntu.go:177] setting up certificates
I0522 17:53:32.411342 67740 provision.go:84] configureAuth start
I0522 17:53:32.411438 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:53:32.427815 67740 provision.go:87] duration metric: took 16.459419ms to configureAuth
W0522 17:53:32.427838 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:32.427861 67740 retry.go:31] will retry after 99.984µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
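[note] Every configureAuth attempt from here on fails identically, and the log is consistent with a template-key mismatch: the inspect format indexes .NetworkSettings.Networks by the container name "ha-828033-m02", while the container was attached to the network named "ha-828033" (see the docker run --network flag above). Under that assumption the {{with}} block matches nothing, the command prints an empty string, and splitting "" on "," yields the reported 1 value instead of the expected "IPv4,IPv6" pair. A self-contained reproduction of that mechanism (assumed root cause, inferred from the log, not verified against minikube source):

package main

import (
	"fmt"
	"os"
	"strings"
	"text/template"
)

type endpoint struct{ IPAddress, GlobalIPv6Address string }

func main() {
	var data struct {
		NetworkSettings struct{ Networks map[string]*endpoint }
	}
	data.NetworkSettings.Networks = map[string]*endpoint{
		"ha-828033": {IPAddress: "192.168.49.3"}, // keyed by *network* name
	}

	// Same shape as the cli_runner format string above, indexed by the
	// container name; index on a missing map key yields a nil *endpoint,
	// which {{with}} treats as empty, so nothing is printed.
	const bad = `{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}`
	var out strings.Builder
	if err := template.Must(template.New("ip").Parse(bad)).Execute(&out, data); err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Printf("output=%q fields=%d\n", out.String(), len(strings.Split(out.String(), ","))) // output="" fields=1

	// With the correct key the same template yields "192.168.49.3," -> 2 fields.
	const good = `{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}`
	out.Reset()
	_ = template.Must(template.New("ok").Parse(good)).Execute(&out, data)
	fmt.Printf("output=%q fields=%d\n", out.String(), len(strings.Split(out.String(), ","))) // output="192.168.49.3," fields=2
}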
I0522 17:53:32.428984 67740 provision.go:84] configureAuth start
I0522 17:53:32.429054 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:53:32.445063 67740 provision.go:87] duration metric: took 16.057791ms to configureAuth
W0522 17:53:32.445082 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:32.445102 67740 retry.go:31] will retry after 208.046µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:32.446175 67740 provision.go:84] configureAuth start
I0522 17:53:32.446261 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:53:32.463887 67740 provision.go:87] duration metric: took 17.691272ms to configureAuth
W0522 17:53:32.463912 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:32.463934 67740 retry.go:31] will retry after 199.015µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:32.465043 67740 provision.go:84] configureAuth start
I0522 17:53:32.465105 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:53:32.486733 67740 provision.go:87] duration metric: took 21.670064ms to configureAuth
W0522 17:53:32.486759 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:32.486781 67740 retry.go:31] will retry after 297.941µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:32.487897 67740 provision.go:84] configureAuth start
I0522 17:53:32.487975 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:53:32.507152 67740 provision.go:87] duration metric: took 19.234225ms to configureAuth
W0522 17:53:32.507176 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:32.507196 67740 retry.go:31] will retry after 745.775µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:32.508343 67740 provision.go:84] configureAuth start
I0522 17:53:32.508443 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:53:32.525068 67740 provision.go:87] duration metric: took 16.703078ms to configureAuth
W0522 17:53:32.525086 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:32.525106 67740 retry.go:31] will retry after 599.638µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:32.526213 67740 provision.go:84] configureAuth start
I0522 17:53:32.526268 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:53:32.542838 67740 provision.go:87] duration metric: took 16.605804ms to configureAuth
W0522 17:53:32.542858 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:32.542874 67740 retry.go:31] will retry after 1.661041ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:32.545050 67740 provision.go:84] configureAuth start
I0522 17:53:32.545124 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:53:32.567428 67740 provision.go:87] duration metric: took 22.355702ms to configureAuth
W0522 17:53:32.567454 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:32.567475 67740 retry.go:31] will retry after 2.108109ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:32.570624 67740 provision.go:84] configureAuth start
I0522 17:53:32.570712 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:53:32.592038 67740 provision.go:87] duration metric: took 21.385415ms to configureAuth
W0522 17:53:32.592083 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:32.592109 67740 retry.go:31] will retry after 2.355136ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:32.595345 67740 provision.go:84] configureAuth start
I0522 17:53:32.595474 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:53:32.616428 67740 provision.go:87] duration metric: took 21.063557ms to configureAuth
W0522 17:53:32.616444 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:32.616459 67740 retry.go:31] will retry after 2.728057ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:32.619645 67740 provision.go:84] configureAuth start
I0522 17:53:32.619716 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:53:32.636068 67740 provision.go:87] duration metric: took 16.400565ms to configureAuth
W0522 17:53:32.636089 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:32.636107 67740 retry.go:31] will retry after 4.374124ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:32.641290 67740 provision.go:84] configureAuth start
I0522 17:53:32.641357 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:53:32.661778 67740 provision.go:87] duration metric: took 20.468934ms to configureAuth
W0522 17:53:32.661802 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:32.661830 67740 retry.go:31] will retry after 8.99759ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:32.671042 67740 provision.go:84] configureAuth start
I0522 17:53:32.671123 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:53:32.691366 67740 provision.go:87] duration metric: took 20.298927ms to configureAuth
W0522 17:53:32.691389 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:32.691409 67740 retry.go:31] will retry after 8.160386ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:32.700603 67740 provision.go:84] configureAuth start
I0522 17:53:32.700678 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:53:32.720331 67740 provision.go:87] duration metric: took 19.708758ms to configureAuth
W0522 17:53:32.720351 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:32.720370 67740 retry.go:31] will retry after 17.367544ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:32.738551 67740 provision.go:84] configureAuth start
I0522 17:53:32.738628 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:53:32.759082 67740 provision.go:87] duration metric: took 20.511019ms to configureAuth
W0522 17:53:32.759106 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:32.759126 67740 retry.go:31] will retry after 15.566976ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:32.775330 67740 provision.go:84] configureAuth start
I0522 17:53:32.775414 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:53:32.794844 67740 provision.go:87] duration metric: took 19.490522ms to configureAuth
W0522 17:53:32.794868 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:32.794890 67740 retry.go:31] will retry after 23.240317ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:32.819079 67740 provision.go:84] configureAuth start
I0522 17:53:32.819159 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:53:32.834908 67740 provision.go:87] duration metric: took 15.798234ms to configureAuth
W0522 17:53:32.834926 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:32.834943 67740 retry.go:31] will retry after 47.572088ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:32.883190 67740 provision.go:84] configureAuth start
I0522 17:53:32.883335 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:53:32.904257 67740 provision.go:87] duration metric: took 21.017488ms to configureAuth
W0522 17:53:32.904296 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:32.904322 67740 retry.go:31] will retry after 146.348345ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:33.051578 67740 provision.go:84] configureAuth start
I0522 17:53:33.051698 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:53:33.071933 67740 provision.go:87] duration metric: took 20.324402ms to configureAuth
W0522 17:53:33.071959 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:33.071983 67740 retry.go:31] will retry after 83.786289ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:33.156290 67740 provision.go:84] configureAuth start
I0522 17:53:33.156396 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:53:33.176346 67740 provision.go:87] duration metric: took 20.024388ms to configureAuth
W0522 17:53:33.176365 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:33.176388 67740 retry.go:31] will retry after 188.977656ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:33.365590 67740 provision.go:84] configureAuth start
I0522 17:53:33.365687 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:53:33.385235 67740 provision.go:87] duration metric: took 19.618338ms to configureAuth
W0522 17:53:33.385262 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:33.385284 67740 retry.go:31] will retry after 372.297422ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:33.758500 67740 provision.go:84] configureAuth start
I0522 17:53:33.758620 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:53:33.778278 67740 provision.go:87] duration metric: took 19.745956ms to configureAuth
W0522 17:53:33.778300 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:33.778321 67740 retry.go:31] will retry after 420.930054ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:34.199905 67740 provision.go:84] configureAuth start
I0522 17:53:34.200025 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:53:34.220225 67740 provision.go:87] duration metric: took 20.271685ms to configureAuth
W0522 17:53:34.220245 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:34.220261 67740 retry.go:31] will retry after 609.139566ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:34.829960 67740 provision.go:84] configureAuth start
I0522 17:53:34.830073 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:53:34.847415 67740 provision.go:87] duration metric: took 17.414439ms to configureAuth
W0522 17:53:34.847434 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:34.847453 67740 retry.go:31] will retry after 1.378249793s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:36.226841 67740 provision.go:84] configureAuth start
I0522 17:53:36.226917 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:53:36.244043 67740 provision.go:87] duration metric: took 17.173768ms to configureAuth
W0522 17:53:36.244065 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:36.244085 67740 retry.go:31] will retry after 915.566153ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:37.160064 67740 provision.go:84] configureAuth start
I0522 17:53:37.160145 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:53:37.178672 67740 provision.go:87] duration metric: took 18.578279ms to configureAuth
W0522 17:53:37.178703 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:37.178727 67740 retry.go:31] will retry after 1.823277401s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:39.003329 67740 provision.go:84] configureAuth start
I0522 17:53:39.003413 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:53:39.022621 67740 provision.go:87] duration metric: took 19.266756ms to configureAuth
W0522 17:53:39.022641 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:39.022658 67740 retry.go:31] will retry after 4.73403722s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:43.757466 67740 provision.go:84] configureAuth start
I0522 17:53:43.757544 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:53:43.774236 67740 provision.go:87] duration metric: took 16.744416ms to configureAuth
W0522 17:53:43.774257 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:43.774290 67740 retry.go:31] will retry after 6.02719967s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:49.804363 67740 provision.go:84] configureAuth start
I0522 17:53:49.804470 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:53:49.821435 67740 provision.go:87] duration metric: took 17.029431ms to configureAuth
W0522 17:53:49.821471 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:49.821493 67740 retry.go:31] will retry after 11.229046488s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:54:01.052359 67740 provision.go:84] configureAuth start
I0522 17:54:01.052467 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:54:01.068843 67740 provision.go:87] duration metric: took 16.436722ms to configureAuth
W0522 17:54:01.068864 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:54:01.068886 67740 retry.go:31] will retry after 11.446957942s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:54:12.516410 67740 provision.go:84] configureAuth start
I0522 17:54:12.516501 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:54:12.532588 67740 provision.go:87] duration metric: took 16.136044ms to configureAuth
W0522 17:54:12.532612 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:54:12.532630 67740 retry.go:31] will retry after 11.131225111s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:54:23.664770 67740 provision.go:84] configureAuth start
I0522 17:54:23.664874 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:54:23.681171 67740 provision.go:87] duration metric: took 16.370034ms to configureAuth
W0522 17:54:23.681191 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:54:23.681208 67740 retry.go:31] will retry after 19.415128992s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:54:43.097052 67740 provision.go:84] configureAuth start
I0522 17:54:43.097128 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:54:43.114019 67740 provision.go:87] duration metric: took 16.940583ms to configureAuth
W0522 17:54:43.114038 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:54:43.114058 67740 ubuntu.go:189] Error configuring auth during provisioning Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
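[note] The retry intervals above grow from ~100µs to ~19s with jitter before retry.go gives up after roughly a minute. A sketch of that jittered exponential shape (hypothetical constants; minikube wraps a backoff helper rather than hand-rolling this):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

func main() {
	wait := 100 * time.Microsecond
	for attempt := 1; wait < 20*time.Second; attempt++ {
		// Jitter around the nominal wait, mirroring the uneven gaps in the log.
		jittered := wait/2 + time.Duration(rand.Int63n(int64(wait)))
		fmt.Printf("attempt %d: will retry after %v\n", attempt, jittered)
		wait *= 2 // exponential growth, as in the logged retry sequence
	}
}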
I0522 17:54:43.114064 67740 machine.go:97] duration metric: took 1m11.222400206s to provisionDockerMachine
I0522 17:54:43.114070 67740 client.go:171] duration metric: took 1m17.329214904s to LocalClient.Create
I0522 17:54:45.114802 67740 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0522 17:54:45.114851 67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
I0522 17:54:45.131137 67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32792 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa Username:docker}
I0522 17:54:45.211800 67740 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0522 17:54:45.215648 67740 start.go:128] duration metric: took 1m19.43263441s to createHost
I0522 17:54:45.215668 67740 start.go:83] releasing machines lock for "ha-828033-m02", held for 1m19.432772722s
W0522 17:54:45.215682 67740 start.go:713] error starting host: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:54:45.216030 67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
I0522 17:54:45.231847 67740 stop.go:39] StopHost: ha-828033-m02
W0522 17:54:45.232101 67740 register.go:133] "Stopping" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
I0522 17:54:45.233821 67740 out.go:177] * Stopping node "ha-828033-m02" ...
I0522 17:54:45.235034 67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
W0522 17:54:45.250648 67740 register.go:133] "PowerOff" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
I0522 17:54:45.252222 67740 out.go:177] * Powering off "ha-828033-m02" via SSH ...
I0522 17:54:45.253375 67740 cli_runner.go:164] Run: docker exec --privileged -t ha-828033-m02 /bin/bash -c "sudo init 0"
I0522 17:54:46.310178 67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
I0522 17:54:46.325583 67740 oci.go:658] container ha-828033-m02 status is Stopped
I0522 17:54:46.325611 67740 oci.go:670] Successfully shutdown container ha-828033-m02
I0522 17:54:46.325618 67740 stop.go:96] shutdown container: err=<nil>
I0522 17:54:46.325665 67740 main.go:141] libmachine: Stopping "ha-828033-m02"...
I0522 17:54:46.325732 67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
I0522 17:54:46.341372 67740 stop.go:66] stop err: Machine "ha-828033-m02" is already stopped.
I0522 17:54:46.341401 67740 stop.go:69] host is already stopped
W0522 17:54:47.341542 67740 register.go:133] "PowerOff" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
I0522 17:54:47.343381 67740 out.go:177] * Deleting "ha-828033-m02" in docker ...
I0522 17:54:47.344698 67740 cli_runner.go:164] Run: docker container inspect -f {{.Id}} ha-828033-m02
I0522 17:54:47.361099 67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
I0522 17:54:47.376628 67740 cli_runner.go:164] Run: docker exec --privileged -t ha-828033-m02 /bin/bash -c "sudo init 0"
W0522 17:54:47.392353 67740 cli_runner.go:211] docker exec --privileged -t ha-828033-m02 /bin/bash -c "sudo init 0" returned with exit code 1
I0522 17:54:47.392393 67740 oci.go:650] error shutdown ha-828033-m02: docker exec --privileged -t ha-828033-m02 /bin/bash -c "sudo init 0": exit status 1
stdout:
stderr:
Error response from daemon: container 3cc3e3e24d53966ce3e9b255e1c04504ec6e9b4b2d34e3fd546f7c2f3049902f is not running
I0522 17:54:48.392556 67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
I0522 17:54:48.408902 67740 oci.go:658] container ha-828033-m02 status is Stopped
I0522 17:54:48.408930 67740 oci.go:670] Successfully shutdown container ha-828033-m02
I0522 17:54:48.408985 67740 cli_runner.go:164] Run: docker rm -f -v ha-828033-m02
I0522 17:54:48.429674 67740 cli_runner.go:164] Run: docker container inspect -f {{.Id}} ha-828033-m02
W0522 17:54:48.445584 67740 cli_runner.go:211] docker container inspect -f {{.Id}} ha-828033-m02 returned with exit code 1
I0522 17:54:48.445652 67740 cli_runner.go:164] Run: docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0522 17:54:48.460965 67740 cli_runner.go:164] Run: docker network rm ha-828033
W0522 17:54:48.475541 67740 cli_runner.go:211] docker network rm ha-828033 returned with exit code 1
W0522 17:54:48.475635 67740 kic.go:390] failed to remove network (which might be okay) ha-828033: unable to delete a network that is attached to a running container
W0522 17:54:48.475837 67740 out.go:239] ! StartHost failed, but will try again: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
! StartHost failed, but will try again: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:54:48.475849 67740 start.go:728] Will try again in 5 seconds ...
I0522 17:54:53.476927 67740 start.go:360] acquireMachinesLock for ha-828033-m02: {Name:mk5f7d59268c8438badf0506a539b57f0dca67dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0522 17:54:53.477039 67740 start.go:364] duration metric: took 79.073µs to acquireMachinesLock for "ha-828033-m02"
I0522 17:54:53.477066 67740 start.go:93] Provisioning new machine with config: &{Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0522 17:54:53.477162 67740 start.go:125] createHost starting for "m02" (driver="docker")
I0522 17:54:53.479034 67740 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
I0522 17:54:53.479153 67740 start.go:159] libmachine.API.Create for "ha-828033" (driver="docker")
I0522 17:54:53.479185 67740 client.go:168] LocalClient.Create starting
I0522 17:54:53.479249 67740 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem
I0522 17:54:53.479310 67740 main.go:141] libmachine: Decoding PEM data...
I0522 17:54:53.479333 67740 main.go:141] libmachine: Parsing certificate...
I0522 17:54:53.479397 67740 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem
I0522 17:54:53.479424 67740 main.go:141] libmachine: Decoding PEM data...
I0522 17:54:53.479441 67740 main.go:141] libmachine: Parsing certificate...
I0522 17:54:53.479649 67740 cli_runner.go:164] Run: docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0522 17:54:53.495874 67740 network_create.go:77] Found existing network {name:ha-828033 subnet:0xc001acd440 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
I0522 17:54:53.495903 67740 kic.go:121] calculated static IP "192.168.49.3" for the "ha-828033-m02" container
I0522 17:54:53.495960 67740 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0522 17:54:53.511000 67740 cli_runner.go:164] Run: docker volume create ha-828033-m02 --label name.minikube.sigs.k8s.io=ha-828033-m02 --label created_by.minikube.sigs.k8s.io=true
I0522 17:54:53.526234 67740 oci.go:103] Successfully created a docker volume ha-828033-m02
I0522 17:54:53.526311 67740 cli_runner.go:164] Run: docker run --rm --name ha-828033-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-828033-m02 --entrypoint /usr/bin/test -v ha-828033-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -d /var/lib
I0522 17:54:53.904691 67740 oci.go:107] Successfully prepared a docker volume ha-828033-m02
I0522 17:54:53.904730 67740 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
I0522 17:54:53.904761 67740 kic.go:194] Starting extracting preloaded images to volume ...
I0522 17:54:53.904817 67740 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-828033-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir
I0522 17:54:58.186920 67740 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-828033-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir: (4.282059818s)
I0522 17:54:58.186951 67740 kic.go:203] duration metric: took 4.282198207s to extract preloaded images to volume ...
W0522 17:54:58.187117 67740 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0522 17:54:58.187205 67740 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0522 17:54:58.233376 67740 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-828033-m02 --name ha-828033-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-828033-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-828033-m02 --network ha-828033 --ip 192.168.49.3 --volume ha-828033-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a
I0522 17:54:58.523486 67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Running}}
I0522 17:54:58.540206 67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
I0522 17:54:58.557874 67740 cli_runner.go:164] Run: docker exec ha-828033-m02 stat /var/lib/dpkg/alternatives/iptables
I0522 17:54:58.597167 67740 oci.go:144] the created container "ha-828033-m02" has a running status.
I0522 17:54:58.597198 67740 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa...
I0522 17:54:58.715099 67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
I0522 17:54:58.715136 67740 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0522 17:54:58.734167 67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
I0522 17:54:58.752454 67740 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0522 17:54:58.752480 67740 kic_runner.go:114] Args: [docker exec --privileged ha-828033-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
I0522 17:54:58.793632 67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
I0522 17:54:58.811842 67740 machine.go:94] provisionDockerMachine start ...
I0522 17:54:58.811942 67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
I0522 17:54:58.831262 67740 main.go:141] libmachine: Using SSH client type: native
I0522 17:54:58.831524 67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil> [] 0s} 127.0.0.1 32797 <nil> <nil>}
I0522 17:54:58.831543 67740 main.go:141] libmachine: About to run SSH command:
hostname
I0522 17:54:58.832166 67740 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40570->127.0.0.1:32797: read: connection reset by peer
I0522 17:55:01.950656 67740 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033-m02
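[note] The handshake at 17:54:58 above is reset while the container's sshd is still coming up, and the same endpoint answers three seconds later. A sketch of tolerating that window by redialing (illustrative only; not minikube's SSH client code):

package main

import (
	"fmt"
	"net"
	"time"
)

// dialWithRetry keeps redialing a freshly exposed port until it settles.
func dialWithRetry(addr string, attempts int) (net.Conn, error) {
	var err error
	for i := 0; i < attempts; i++ {
		var c net.Conn
		c, err = net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			return c, nil
		}
		time.Sleep(time.Second) // e.g. connection reset by peer -> try again
	}
	return nil, fmt.Errorf("endpoint never settled: %w", err)
}

func main() {
	if c, err := dialWithRetry("127.0.0.1:32797", 5); err == nil {
		c.Close()
		fmt.Println("tcp endpoint up")
	} else {
		fmt.Println(err)
	}
}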
I0522 17:55:01.950684 67740 ubuntu.go:169] provisioning hostname "ha-828033-m02"
I0522 17:55:01.950756 67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
I0522 17:55:01.967254 67740 main.go:141] libmachine: Using SSH client type: native
I0522 17:55:01.967478 67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil> [] 0s} 127.0.0.1 32797 <nil> <nil>}
I0522 17:55:01.967497 67740 main.go:141] libmachine: About to run SSH command:
sudo hostname ha-828033-m02 && echo "ha-828033-m02" | sudo tee /etc/hostname
I0522 17:55:02.089579 67740 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033-m02
I0522 17:55:02.089655 67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
I0522 17:55:02.105960 67740 main.go:141] libmachine: Using SSH client type: native
I0522 17:55:02.106178 67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil> [] 0s} 127.0.0.1 32797 <nil> <nil>}
I0522 17:55:02.106203 67740 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sha-828033-m02' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-828033-m02/g' /etc/hosts;
else
echo '127.0.1.1 ha-828033-m02' | sudo tee -a /etc/hosts;
fi
fi
I0522 17:55:02.219113 67740 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0522 17:55:02.219142 67740 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
I0522 17:55:02.219165 67740 ubuntu.go:177] setting up certificates
I0522 17:55:02.219178 67740 provision.go:84] configureAuth start
I0522 17:55:02.219229 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:55:02.235165 67740 provision.go:87] duration metric: took 15.978249ms to configureAuth
W0522 17:55:02.235185 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:02.235202 67740 retry.go:31] will retry after 111.405µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:02.236316 67740 provision.go:84] configureAuth start
I0522 17:55:02.236371 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:55:02.251579 67740 provision.go:87] duration metric: took 15.244801ms to configureAuth
W0522 17:55:02.251596 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:02.251612 67740 retry.go:31] will retry after 121.831µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:02.252718 67740 provision.go:84] configureAuth start
I0522 17:55:02.252781 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:55:02.268254 67740 provision.go:87] duration metric: took 15.517955ms to configureAuth
W0522 17:55:02.268272 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:02.268289 67740 retry.go:31] will retry after 122.468µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:02.269405 67740 provision.go:84] configureAuth start
I0522 17:55:02.269470 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:55:02.286410 67740 provision.go:87] duration metric: took 16.987035ms to configureAuth
W0522 17:55:02.286429 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:02.286450 67740 retry.go:31] will retry after 407.867µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:02.287564 67740 provision.go:84] configureAuth start
I0522 17:55:02.287622 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:55:02.302324 67740 provision.go:87] duration metric: took 14.743181ms to configureAuth
W0522 17:55:02.302338 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:02.302353 67740 retry.go:31] will retry after 682.441µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:02.303472 67740 provision.go:84] configureAuth start
I0522 17:55:02.303536 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:55:02.318179 67740 provision.go:87] duration metric: took 14.688319ms to configureAuth
W0522 17:55:02.318196 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:02.318213 67740 retry.go:31] will retry after 740.096µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:02.319311 67740 provision.go:84] configureAuth start
I0522 17:55:02.319362 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:55:02.333371 67740 provision.go:87] duration metric: took 14.043622ms to configureAuth
W0522 17:55:02.333386 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:02.333402 67740 retry.go:31] will retry after 794.169µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:02.334517 67740 provision.go:84] configureAuth start
I0522 17:55:02.334581 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:55:02.350167 67740 provision.go:87] duration metric: took 15.635141ms to configureAuth
W0522 17:55:02.350182 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:02.350198 67740 retry.go:31] will retry after 1.884267ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:02.352398 67740 provision.go:84] configureAuth start
I0522 17:55:02.352452 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:55:02.368273 67740 provision.go:87] duration metric: took 15.856327ms to configureAuth
W0522 17:55:02.368295 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:02.368312 67740 retry.go:31] will retry after 2.946487ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:02.371499 67740 provision.go:84] configureAuth start
I0522 17:55:02.371558 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:55:02.386648 67740 provision.go:87] duration metric: took 15.13195ms to configureAuth
W0522 17:55:02.386668 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:02.386686 67740 retry.go:31] will retry after 3.738526ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:02.390865 67740 provision.go:84] configureAuth start
I0522 17:55:02.390919 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:55:02.406987 67740 provision.go:87] duration metric: took 16.104393ms to configureAuth
W0522 17:55:02.407002 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:02.407015 67740 retry.go:31] will retry after 6.575896ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:02.414192 67740 provision.go:84] configureAuth start
I0522 17:55:02.414252 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:55:02.428668 67740 provision.go:87] duration metric: took 14.459146ms to configureAuth
W0522 17:55:02.428682 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:02.428697 67740 retry.go:31] will retry after 8.970723ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:02.437877 67740 provision.go:84] configureAuth start
I0522 17:55:02.437947 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:55:02.454233 67740 provision.go:87] duration metric: took 16.335255ms to configureAuth
W0522 17:55:02.454251 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:02.454267 67740 retry.go:31] will retry after 10.684147ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:02.465455 67740 provision.go:84] configureAuth start
I0522 17:55:02.465526 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:55:02.481723 67740 provision.go:87] duration metric: took 16.239661ms to configureAuth
W0522 17:55:02.481741 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:02.481763 67740 retry.go:31] will retry after 18.313065ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:02.500964 67740 provision.go:84] configureAuth start
I0522 17:55:02.501036 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:55:02.516727 67740 provision.go:87] duration metric: took 15.73571ms to configureAuth
W0522 17:55:02.516744 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:02.516762 67740 retry.go:31] will retry after 38.484546ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:02.555967 67740 provision.go:84] configureAuth start
I0522 17:55:02.556066 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:55:02.571765 67740 provision.go:87] duration metric: took 15.775996ms to configureAuth
W0522 17:55:02.571791 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:02.571810 67740 retry.go:31] will retry after 39.432408ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:02.612013 67740 provision.go:84] configureAuth start
I0522 17:55:02.612103 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:55:02.628447 67740 provision.go:87] duration metric: took 16.410627ms to configureAuth
W0522 17:55:02.628466 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:02.628485 67740 retry.go:31] will retry after 33.551108ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:02.662675 67740 provision.go:84] configureAuth start
I0522 17:55:02.662769 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:55:02.679445 67740 provision.go:87] duration metric: took 16.731972ms to configureAuth
W0522 17:55:02.679464 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:02.679484 67740 retry.go:31] will retry after 81.05036ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:02.760653 67740 provision.go:84] configureAuth start
I0522 17:55:02.760738 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:55:02.776954 67740 provision.go:87] duration metric: took 16.276016ms to configureAuth
W0522 17:55:02.776979 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:02.776998 67740 retry.go:31] will retry after 214.543912ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:02.992409 67740 provision.go:84] configureAuth start
I0522 17:55:02.992522 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:55:03.009801 67740 provision.go:87] duration metric: took 17.348572ms to configureAuth
W0522 17:55:03.009828 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:03.009848 67740 retry.go:31] will retry after 147.68294ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:03.158197 67740 provision.go:84] configureAuth start
I0522 17:55:03.158288 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:55:03.174209 67740 provision.go:87] duration metric: took 15.985368ms to configureAuth
W0522 17:55:03.174228 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:03.174245 67740 retry.go:31] will retry after 271.429453ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:03.446454 67740 provision.go:84] configureAuth start
I0522 17:55:03.446568 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:55:03.462755 67740 provision.go:87] duration metric: took 16.269029ms to configureAuth
W0522 17:55:03.462775 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:03.462813 67740 retry.go:31] will retry after 640.121031ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:04.103329 67740 provision.go:84] configureAuth start
I0522 17:55:04.103429 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:55:04.120167 67740 provision.go:87] duration metric: took 16.813953ms to configureAuth
W0522 17:55:04.120188 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:04.120208 67740 retry.go:31] will retry after 602.013778ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:04.722980 67740 provision.go:84] configureAuth start
I0522 17:55:04.723059 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:55:04.739287 67740 provision.go:87] duration metric: took 16.263112ms to configureAuth
W0522 17:55:04.739308 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:04.739326 67740 retry.go:31] will retry after 1.341223625s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:06.081721 67740 provision.go:84] configureAuth start
I0522 17:55:06.081836 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:55:06.098304 67740 provision.go:87] duration metric: took 16.547011ms to configureAuth
W0522 17:55:06.098322 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:06.098338 67740 retry.go:31] will retry after 2.170272382s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:08.269528 67740 provision.go:84] configureAuth start
I0522 17:55:08.269635 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:55:08.285825 67740 provision.go:87] duration metric: took 16.2651ms to configureAuth
W0522 17:55:08.285844 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:08.285861 67740 retry.go:31] will retry after 3.377189854s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:11.663807 67740 provision.go:84] configureAuth start
I0522 17:55:11.663916 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:55:11.681079 67740 provision.go:87] duration metric: took 17.243701ms to configureAuth
W0522 17:55:11.681112 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:11.681131 67740 retry.go:31] will retry after 2.766930623s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:14.448404 67740 provision.go:84] configureAuth start
I0522 17:55:14.448485 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:55:14.465374 67740 provision.go:87] duration metric: took 16.943416ms to configureAuth
W0522 17:55:14.465392 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:14.465408 67740 retry.go:31] will retry after 7.317834793s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:21.783808 67740 provision.go:84] configureAuth start
I0522 17:55:21.783931 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:55:21.801618 67740 provision.go:87] duration metric: took 17.778585ms to configureAuth
W0522 17:55:21.801637 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:21.801655 67740 retry.go:31] will retry after 5.749970452s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:27.552576 67740 provision.go:84] configureAuth start
I0522 17:55:27.552676 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:55:27.569090 67740 provision.go:87] duration metric: took 16.487886ms to configureAuth
W0522 17:55:27.569109 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:27.569126 67740 retry.go:31] will retry after 12.570280817s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:40.141724 67740 provision.go:84] configureAuth start
I0522 17:55:40.141836 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:55:40.158702 67740 provision.go:87] duration metric: took 16.931082ms to configureAuth
W0522 17:55:40.158723 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:40.158743 67740 retry.go:31] will retry after 13.696494034s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:53.856578 67740 provision.go:84] configureAuth start
I0522 17:55:53.856693 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:55:53.873246 67740 provision.go:87] duration metric: took 16.620408ms to configureAuth
W0522 17:55:53.873273 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:53.873290 67740 retry.go:31] will retry after 32.163778232s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:56:26.037485 67740 provision.go:84] configureAuth start
I0522 17:56:26.037596 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:56:26.054707 67740 provision.go:87] duration metric: took 17.19549ms to configureAuth
W0522 17:56:26.054725 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:56:26.054742 67740 ubuntu.go:189] Error configuring auth during provisioning Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:56:26.054750 67740 machine.go:97] duration metric: took 1m27.242886101s to provisionDockerMachine
I0522 17:56:26.054758 67740 client.go:171] duration metric: took 1m32.575565656s to LocalClient.Create
I0522 17:56:28.055434 67740 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0522 17:56:28.055492 67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
I0522 17:56:28.072469 67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32797 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa Username:docker}
I0522 17:56:28.155834 67740 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0522 17:56:28.159690 67740 start.go:128] duration metric: took 1m34.682513511s to createHost
I0522 17:56:28.159711 67740 start.go:83] releasing machines lock for "ha-828033-m02", held for 1m34.682658667s
W0522 17:56:28.159799 67740 out.go:239] * Failed to start docker container. Running "minikube delete -p ha-828033" may fix it: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:56:28.161597 67740 out.go:177]
W0522 17:56:28.162787 67740 out.go:239] X Exiting due to GUEST_START: failed to start node: adding node: Failed to start host: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
W0522 17:56:28.162807 67740 out.go:239] *
W0522 17:56:28.163671 67740 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0522 17:56:28.165036 67740 out.go:177]
** /stderr **
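The retry cadence in the stderr block above (delays stepping from 122µs through roughly doubling intervals up to ~32s before giving up) is the signature of a capped exponential backoff with jitter. Below is a minimal, self-contained Go sketch of that pattern; the function name, constants, and jitter formula are assumptions for illustration, not minikube's actual retry.go implementation.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff keeps calling op until it succeeds or the overall
// deadline passes, sleeping a jittered, roughly doubling interval between
// attempts, capped at maxWait. The shape matches the "will retry after ..."
// lines in the log; the exact constants are assumed.
func retryWithBackoff(deadline, maxWait time.Duration, op func() error) error {
	start := time.Now()
	wait := 100 * time.Microsecond
	for {
		err := op()
		if err == nil {
			return nil
		}
		if time.Since(start) > deadline {
			return fmt.Errorf("giving up after %s: %w", time.Since(start), err)
		}
		// Jitter in [0.5x, 1.5x) so concurrent retriers do not synchronize.
		sleep := time.Duration(float64(wait) * (0.5 + rand.Float64()))
		if sleep > maxWait {
			sleep = maxWait
		}
		time.Sleep(sleep)
		wait *= 2
	}
}

func main() {
	err := retryWithBackoff(2*time.Second, 30*time.Second, func() error {
		return errors.New("error getting ip during provisioning")
	})
	fmt.Println(err)
}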
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-linux-amd64 start -p ha-828033 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker --container-runtime=docker" : exit status 80
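The failure mode itself is visible in the repeated cli_runner line: the IP lookup renders a Go template over the container's NetworkSettings.Networks map, and when the container currently has no entry for the "ha-828033-m02" network (for example while the node container is being torn down and recreated), the {{with (index .NetworkSettings.Networks ...)}}...{{end}} block renders nothing. Splitting the resulting empty string on "," yields a single empty element, which is exactly "should have 2 values, got 1 values: []" (fmt prints []string{""} as "[]"). The following Go sketch reproduces that behavior around the command shown in the log; it is an illustration, not minikube's own source.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerAddresses mirrors the inspect command in the log: it asks Docker
// to render "<ipv4>,<ipv6>" for the named network. When the container has
// no entry for that network, the {{with}} block renders nothing, the output
// is empty, and strings.Split returns one empty element.
func containerAddresses(container, network string) (string, string, error) {
	tmpl := fmt.Sprintf(`{{with (index .NetworkSettings.Networks %q)}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}`, network)
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", "", err
	}
	ips := strings.Split(strings.TrimSpace(string(out)), ",")
	if len(ips) != 2 {
		return "", "", fmt.Errorf("container addresses should have 2 values, got %d values: %v", len(ips), ips)
	}
	return ips[0], ips[1], nil
}

func main() {
	if v4, v6, err := containerAddresses("ha-828033-m02", "ha-828033-m02"); err != nil {
		fmt.Println(err)
	} else {
		fmt.Printf("IPv4=%s IPv6=%q\n", v4, v6)
	}
}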
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestMultiControlPlane/serial/StartCluster]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect ha-828033
helpers_test.go:235: (dbg) docker inspect ha-828033:
-- stdout --
[
{
"Id": "a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18",
"Created": "2024-05-22T17:52:56.610182625Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 68363,
"ExitCode": 0,
"Error": "",
"StartedAt": "2024-05-22T17:52:56.86490163Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:cb99fd41fe8a25604c2e3534b4c012b22ed4bc29522c7f33230caec3b2c64334",
"ResolvConfPath": "/var/lib/docker/containers/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/hostname",
"HostsPath": "/var/lib/docker/containers/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/hosts",
"LogPath": "/var/lib/docker/containers/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18-json.log",
"Name": "/ha-828033",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"ha-828033:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {
"max-size": "100m"
}
},
"NetworkMode": "ha-828033",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 2306867200,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 4613734400,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/7925abb66c52d23cbe606e639ae5f47f326d21ee961207c6d4d5c88d7fa60a92-init/diff:/var/lib/docker/overlay2/709a71ed13f27a0ebdbcf3488f11950b5a04338f8f86750702ebc331da0ae8e4/diff",
"MergedDir": "/var/lib/docker/overlay2/7925abb66c52d23cbe606e639ae5f47f326d21ee961207c6d4d5c88d7fa60a92/merged",
"UpperDir": "/var/lib/docker/overlay2/7925abb66c52d23cbe606e639ae5f47f326d21ee961207c6d4d5c88d7fa60a92/diff",
"WorkDir": "/var/lib/docker/overlay2/7925abb66c52d23cbe606e639ae5f47f326d21ee961207c6d4d5c88d7fa60a92/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "ha-828033",
"Source": "/var/lib/docker/volumes/ha-828033/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "ha-828033",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "ha-828033",
"name.minikube.sigs.k8s.io": "ha-828033",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "214439a25e1a683e872846d9aae152cc27b978772ac9336727dd04a5fb05455d",
"SandboxKey": "/var/run/docker/netns/214439a25e1a",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32787"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32786"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32783"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32785"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32784"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"ha-828033": {
"IPAMConfig": {
"IPv4Address": "192.168.49.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "02:42:c0:a8:31:02",
"NetworkID": "638c6f0967c1e35f3bafe98ee33aa3c714dba300cb11f98d6a87d37f54d53d4f",
"EndpointID": "0c279da997a250d17ea7ba275327b11579f75ecdb00cf9acaca6353d94077512",
"Gateway": "192.168.49.1",
"IPAddress": "192.168.49.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DriverOpts": null,
"DNSNames": [
"ha-828033",
"a436ef1be4f0"
]
}
}
}
}
]
-- /stdout --
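For comparison, the Ports map in the inspect output above is where lines like `new ssh client: &{IP:127.0.0.1 Port:32787 ...}` come from: the published 22/tcp binding supplies the host port used for SSH into the kic container. A small stand-alone Go example of pulling that field out of `docker container inspect` JSON follows; the struct is trimmed to the fields used here and is illustrative, not minikube's parsing code.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// portBinding is trimmed to the two fields used from the inspect JSON.
type portBinding struct {
	HostIp   string
	HostPort string
}

type containerInfo struct {
	NetworkSettings struct {
		Ports map[string][]portBinding
	}
}

// sshHostPort returns the host port published for 22/tcp, e.g. "32787"
// for the ha-828033 container inspected above.
func sshHostPort(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name).Output()
	if err != nil {
		return "", err
	}
	var info []containerInfo // docker inspect always returns a JSON array
	if err := json.Unmarshal(out, &info); err != nil {
		return "", err
	}
	if len(info) == 0 {
		return "", fmt.Errorf("no such container: %s", name)
	}
	bindings := info[0].NetworkSettings.Ports["22/tcp"]
	if len(bindings) == 0 {
		return "", fmt.Errorf("no host binding for 22/tcp on %s", name)
	}
	return bindings[0].HostPort, nil
}

func main() {
	port, err := sshHostPort("ha-828033")
	fmt.Println(port, err)
}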
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p ha-828033 -n ha-828033
helpers_test.go:244: <<< TestMultiControlPlane/serial/StartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestMultiControlPlane/serial/StartCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p ha-828033 logs -n 25
helpers_test.go:252: TestMultiControlPlane/serial/StartCluster logs:
-- stdout --
==> Audit <==
|----------------|-------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|----------------|-------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
| image | functional-164981 image load --daemon | functional-164981 | jenkins | v1.33.1 | 22 May 24 17:52 UTC | 22 May 24 17:52 UTC |
| | gcr.io/google-containers/addon-resizer:functional-164981 | | | | | |
| | --alsologtostderr | | | | | |
| license | | minikube | jenkins | v1.33.1 | 22 May 24 17:52 UTC | 22 May 24 17:52 UTC |
| update-context | functional-164981 | functional-164981 | jenkins | v1.33.1 | 22 May 24 17:52 UTC | 22 May 24 17:52 UTC |
| | update-context | | | | | |
| | --alsologtostderr -v=2 | | | | | |
| update-context | functional-164981 | functional-164981 | jenkins | v1.33.1 | 22 May 24 17:52 UTC | 22 May 24 17:52 UTC |
| | update-context | | | | | |
| | --alsologtostderr -v=2 | | | | | |
| update-context | functional-164981 | functional-164981 | jenkins | v1.33.1 | 22 May 24 17:52 UTC | 22 May 24 17:52 UTC |
| | update-context | | | | | |
| | --alsologtostderr -v=2 | | | | | |
| image | functional-164981 image ls | functional-164981 | jenkins | v1.33.1 | 22 May 24 17:52 UTC | 22 May 24 17:52 UTC |
| image | functional-164981 image load --daemon | functional-164981 | jenkins | v1.33.1 | 22 May 24 17:52 UTC | 22 May 24 17:52 UTC |
| | gcr.io/google-containers/addon-resizer:functional-164981 | | | | | |
| | --alsologtostderr | | | | | |
| image | functional-164981 image ls | functional-164981 | jenkins | v1.33.1 | 22 May 24 17:52 UTC | 22 May 24 17:52 UTC |
| image | functional-164981 image load --daemon | functional-164981 | jenkins | v1.33.1 | 22 May 24 17:52 UTC | 22 May 24 17:52 UTC |
| | gcr.io/google-containers/addon-resizer:functional-164981 | | | | | |
| | --alsologtostderr | | | | | |
| image | functional-164981 image ls | functional-164981 | jenkins | v1.33.1 | 22 May 24 17:52 UTC | 22 May 24 17:52 UTC |
| image | functional-164981 image save | functional-164981 | jenkins | v1.33.1 | 22 May 24 17:52 UTC | 22 May 24 17:52 UTC |
| | gcr.io/google-containers/addon-resizer:functional-164981 | | | | | |
| | /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar | | | | | |
| | --alsologtostderr | | | | | |
| image | functional-164981 image rm | functional-164981 | jenkins | v1.33.1 | 22 May 24 17:52 UTC | 22 May 24 17:52 UTC |
| | gcr.io/google-containers/addon-resizer:functional-164981 | | | | | |
| | --alsologtostderr | | | | | |
| image | functional-164981 image ls | functional-164981 | jenkins | v1.33.1 | 22 May 24 17:52 UTC | 22 May 24 17:52 UTC |
| image | functional-164981 image load | functional-164981 | jenkins | v1.33.1 | 22 May 24 17:52 UTC | 22 May 24 17:52 UTC |
| | /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar | | | | | |
| | --alsologtostderr | | | | | |
| image | functional-164981 image ls | functional-164981 | jenkins | v1.33.1 | 22 May 24 17:52 UTC | 22 May 24 17:52 UTC |
| image | functional-164981 image save --daemon | functional-164981 | jenkins | v1.33.1 | 22 May 24 17:52 UTC | 22 May 24 17:52 UTC |
| | gcr.io/google-containers/addon-resizer:functional-164981 | | | | | |
| | --alsologtostderr | | | | | |
| ssh | functional-164981 ssh pgrep | functional-164981 | jenkins | v1.33.1 | 22 May 24 17:52 UTC | |
| | buildkitd | | | | | |
| image | functional-164981 | functional-164981 | jenkins | v1.33.1 | 22 May 24 17:52 UTC | |
| | image ls --format json | | | | | |
| | --alsologtostderr | | | | | |
| image | functional-164981 | functional-164981 | jenkins | v1.33.1 | 22 May 24 17:52 UTC | |
| | image ls --format short | | | | | |
| | --alsologtostderr | | | | | |
| image | functional-164981 | functional-164981 | jenkins | v1.33.1 | 22 May 24 17:52 UTC | 22 May 24 17:52 UTC |
| | image ls --format yaml | | | | | |
| | --alsologtostderr | | | | | |
| image | functional-164981 | functional-164981 | jenkins | v1.33.1 | 22 May 24 17:52 UTC | 22 May 24 17:52 UTC |
| | image ls --format table | | | | | |
| | --alsologtostderr | | | | | |
| image | functional-164981 image build -t | functional-164981 | jenkins | v1.33.1 | 22 May 24 17:52 UTC | 22 May 24 17:52 UTC |
| | localhost/my-image:functional-164981 | | | | | |
| | testdata/build --alsologtostderr | | | | | |
| image | functional-164981 image ls | functional-164981 | jenkins | v1.33.1 | 22 May 24 17:52 UTC | 22 May 24 17:52 UTC |
| delete | -p functional-164981 | functional-164981 | jenkins | v1.33.1 | 22 May 24 17:52 UTC | 22 May 24 17:52 UTC |
| start | -p ha-828033 --wait=true | ha-828033 | jenkins | v1.33.1 | 22 May 24 17:52 UTC | |
| | --memory=2200 --ha | | | | | |
| | -v=7 --alsologtostderr | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
|----------------|-------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2024/05/22 17:52:51
Running on machine: ubuntu-20-agent-10
Binary: Built with gc go1.22.3 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0522 17:52:51.616388 67740 out.go:291] Setting OutFile to fd 1 ...
I0522 17:52:51.616660 67740 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0522 17:52:51.616670 67740 out.go:304] Setting ErrFile to fd 2...
I0522 17:52:51.616674 67740 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0522 17:52:51.616882 67740 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-9771/.minikube/bin
I0522 17:52:51.617455 67740 out.go:298] Setting JSON to false
I0522 17:52:51.618613 67740 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2116,"bootTime":1716398256,"procs":498,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0522 17:52:51.618668 67740 start.go:139] virtualization: kvm guest
I0522 17:52:51.620581 67740 out.go:177] * [ha-828033] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
I0522 17:52:51.621796 67740 out.go:177] - MINIKUBE_LOCATION=18943
I0522 17:52:51.622990 67740 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0522 17:52:51.621903 67740 notify.go:220] Checking for updates...
I0522 17:52:51.625177 67740 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/18943-9771/kubeconfig
I0522 17:52:51.626330 67740 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-9771/.minikube
I0522 17:52:51.627520 67740 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0522 17:52:51.628659 67740 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0522 17:52:51.629817 67740 driver.go:392] Setting default libvirt URI to qemu:///system
I0522 17:52:51.650607 67740 docker.go:122] docker version: linux-26.1.3:Docker Engine - Community
I0522 17:52:51.650716 67740 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0522 17:52:51.695998 67740 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:44 SystemTime:2024-05-22 17:52:51.687785691 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0522 17:52:51.696115 67740 docker.go:295] overlay module found
I0522 17:52:51.697872 67740 out.go:177] * Using the docker driver based on user configuration
I0522 17:52:51.699059 67740 start.go:297] selected driver: docker
I0522 17:52:51.699080 67740 start.go:901] validating driver "docker" against <nil>
I0522 17:52:51.699093 67740 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0522 17:52:51.699900 67740 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0522 17:52:51.745624 67740 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:44 SystemTime:2024-05-22 17:52:51.73730429 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0522 17:52:51.745821 67740 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I0522 17:52:51.746041 67740 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0522 17:52:51.747482 67740 out.go:177] * Using Docker driver with root privileges
I0522 17:52:51.748998 67740 cni.go:84] Creating CNI manager for ""
I0522 17:52:51.749011 67740 cni.go:136] multinode detected (0 nodes found), recommending kindnet
I0522 17:52:51.749020 67740 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
I0522 17:52:51.749077 67740 start.go:340] cluster config:
{Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0522 17:52:51.750256 67740 out.go:177] * Starting "ha-828033" primary control-plane node in "ha-828033" cluster
I0522 17:52:51.751326 67740 cache.go:121] Beginning downloading kic base image for docker with docker
I0522 17:52:51.752481 67740 out.go:177] * Pulling base image v0.0.44-1715707529-18887 ...
I0522 17:52:51.753555 67740 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
I0522 17:52:51.753579 67740 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
I0522 17:52:51.753585 67740 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
I0522 17:52:51.753627 67740 cache.go:56] Caching tarball of preloaded images
I0522 17:52:51.753764 67740 preload.go:173] Found /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0522 17:52:51.753779 67740 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
I0522 17:52:51.754104 67740 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
I0522 17:52:51.754126 67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json: {Name:mk536dd8f64273be31005b58553b5cd1d6e6f69c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0522 17:52:51.769095 67740 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon, skipping pull
I0522 17:52:51.769113 67740 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in daemon, skipping load
I0522 17:52:51.769128 67740 cache.go:194] Successfully downloaded all kic artifacts
I0522 17:52:51.769147 67740 start.go:360] acquireMachinesLock for ha-828033: {Name:mkbefd324494a1695e1df90bb310edf3046d6e62 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0522 17:52:51.769223 67740 start.go:364] duration metric: took 61.25µs to acquireMachinesLock for "ha-828033"
I0522 17:52:51.769243 67740 start.go:93] Provisioning new machine with config: &{Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0522 17:52:51.769302 67740 start.go:125] createHost starting for "" (driver="docker")
I0522 17:52:51.771035 67740 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
I0522 17:52:51.771256 67740 start.go:159] libmachine.API.Create for "ha-828033" (driver="docker")
I0522 17:52:51.771318 67740 client.go:168] LocalClient.Create starting
I0522 17:52:51.771394 67740 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem
I0522 17:52:51.771429 67740 main.go:141] libmachine: Decoding PEM data...
I0522 17:52:51.771446 67740 main.go:141] libmachine: Parsing certificate...
I0522 17:52:51.771502 67740 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem
I0522 17:52:51.771520 67740 main.go:141] libmachine: Decoding PEM data...
I0522 17:52:51.771528 67740 main.go:141] libmachine: Parsing certificate...
I0522 17:52:51.771801 67740 cli_runner.go:164] Run: docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0522 17:52:51.786884 67740 cli_runner.go:211] docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0522 17:52:51.786972 67740 network_create.go:281] running [docker network inspect ha-828033] to gather additional debugging logs...
I0522 17:52:51.787013 67740 cli_runner.go:164] Run: docker network inspect ha-828033
W0522 17:52:51.801352 67740 cli_runner.go:211] docker network inspect ha-828033 returned with exit code 1
I0522 17:52:51.801375 67740 network_create.go:284] error running [docker network inspect ha-828033]: docker network inspect ha-828033: exit status 1
stdout:
[]
stderr:
Error response from daemon: network ha-828033 not found
I0522 17:52:51.801394 67740 network_create.go:286] output of [docker network inspect ha-828033]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network ha-828033 not found
** /stderr **
I0522 17:52:51.801476 67740 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0522 17:52:51.817609 67740 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001996110}
I0522 17:52:51.817644 67740 network_create.go:124] attempt to create docker network ha-828033 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I0522 17:52:51.817690 67740 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-828033 ha-828033
I0522 17:52:51.866851 67740 network_create.go:108] docker network ha-828033 192.168.49.0/24 created
I0522 17:52:51.866880 67740 kic.go:121] calculated static IP "192.168.49.2" for the "ha-828033" container
I0522 17:52:51.866949 67740 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0522 17:52:51.883567 67740 cli_runner.go:164] Run: docker volume create ha-828033 --label name.minikube.sigs.k8s.io=ha-828033 --label created_by.minikube.sigs.k8s.io=true
I0522 17:52:51.902679 67740 oci.go:103] Successfully created a docker volume ha-828033
I0522 17:52:51.902766 67740 cli_runner.go:164] Run: docker run --rm --name ha-828033-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-828033 --entrypoint /usr/bin/test -v ha-828033:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -d /var/lib
I0522 17:52:52.415715 67740 oci.go:107] Successfully prepared a docker volume ha-828033
I0522 17:52:52.415766 67740 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
I0522 17:52:52.415787 67740 kic.go:194] Starting extracting preloaded images to volume ...
I0522 17:52:52.415843 67740 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-828033:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir
I0522 17:52:56.549014 67740 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-828033:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir: (4.133117574s)
I0522 17:52:56.549059 67740 kic.go:203] duration metric: took 4.133268991s to extract preloaded images to volume ...
W0522 17:52:56.549215 67740 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0522 17:52:56.549336 67740 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0522 17:52:56.595962 67740 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-828033 --name ha-828033 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-828033 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-828033 --network ha-828033 --ip 192.168.49.2 --volume ha-828033:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a
I0522 17:52:56.872425 67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Running}}
I0522 17:52:56.891462 67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
I0522 17:52:56.907928 67740 cli_runner.go:164] Run: docker exec ha-828033 stat /var/lib/dpkg/alternatives/iptables
I0522 17:52:56.946756 67740 oci.go:144] the created container "ha-828033" has a running status.
I0522 17:52:56.946795 67740 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa...
I0522 17:52:57.123336 67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa.pub -> /home/docker/.ssh/authorized_keys
I0522 17:52:57.123383 67740 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0522 17:52:57.142261 67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
I0522 17:52:57.162674 67740 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0522 17:52:57.162700 67740 kic_runner.go:114] Args: [docker exec --privileged ha-828033 chown docker:docker /home/docker/.ssh/authorized_keys]
I0522 17:52:57.249568 67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
I0522 17:52:57.270001 67740 machine.go:94] provisionDockerMachine start ...
I0522 17:52:57.270092 67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
I0522 17:52:57.288870 67740 main.go:141] libmachine: Using SSH client type: native
I0522 17:52:57.289150 67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil> [] 0s} 127.0.0.1 32787 <nil> <nil>}
I0522 17:52:57.289175 67740 main.go:141] libmachine: About to run SSH command:
hostname
I0522 17:52:57.494306 67740 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033
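The "native" SSH client referenced above is Go's golang.org/x/crypto/ssh. A self-contained sketch running `hostname` against the forwarded port from this run (127.0.0.1:32787, the logged key path); host-key verification is skipped here, which is only acceptable for a local kic container:

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local container only; never do this for real hosts
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:32787", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()
	out, err := sess.Output("hostname")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("SSH cmd output: %s", out)
}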
I0522 17:52:57.494336 67740 ubuntu.go:169] provisioning hostname "ha-828033"
I0522 17:52:57.494406 67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
I0522 17:52:57.511445 67740 main.go:141] libmachine: Using SSH client type: native
I0522 17:52:57.511684 67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil> [] 0s} 127.0.0.1 32787 <nil> <nil>}
I0522 17:52:57.511709 67740 main.go:141] libmachine: About to run SSH command:
sudo hostname ha-828033 && echo "ha-828033" | sudo tee /etc/hostname
I0522 17:52:57.632360 67740 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033
I0522 17:52:57.632434 67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
I0522 17:52:57.648419 67740 main.go:141] libmachine: Using SSH client type: native
I0522 17:52:57.648608 67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil> [] 0s} 127.0.0.1 32787 <nil> <nil>}
I0522 17:52:57.648626 67740 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sha-828033' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-828033/g' /etc/hosts;
else
echo '127.0.1.1 ha-828033' | sudo tee -a /etc/hosts;
fi
fi
I0522 17:52:57.762947 67740 main.go:141] libmachine: SSH cmd err, output: <nil>:
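This grep/sed dance, and the later host.minikube.internal and control-plane.minikube.internal edits, are all the same filter-then-append idiom for /etc/hosts. A rough Go equivalent of that idiom, parameterized so it can be tried on a scratch copy rather than the real /etc/hosts:

package main

import (
	"log"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line ending in the hostname and
// appends a fresh "ip<TAB>hostname" mapping -- the same effect as the
// grep -v / echo / sudo tee pipelines in this log.
func ensureHostsEntry(path, ip, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+hostname) || strings.HasSuffix(line, " "+hostname) {
			continue // stale entry, drop it
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+hostname)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Scratch copy; point at /etc/hosts (as root) for the real thing.
	if err := ensureHostsEntry("hosts.copy", "127.0.1.1", "ha-828033"); err != nil {
		log.Fatal(err)
	}
}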
I0522 17:52:57.762976 67740 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
I0522 17:52:57.762997 67740 ubuntu.go:177] setting up certificates
I0522 17:52:57.763011 67740 provision.go:84] configureAuth start
I0522 17:52:57.763069 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033
I0522 17:52:57.779057 67740 provision.go:143] copyHostCerts
I0522 17:52:57.779092 67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem
I0522 17:52:57.779116 67740 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem, removing ...
I0522 17:52:57.779121 67740 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem
I0522 17:52:57.779194 67740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem (1078 bytes)
I0522 17:52:57.779293 67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem
I0522 17:52:57.779410 67740 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem, removing ...
I0522 17:52:57.779430 67740 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem
I0522 17:52:57.779491 67740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem (1123 bytes)
I0522 17:52:57.779566 67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem
I0522 17:52:57.779592 67740 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem, removing ...
I0522 17:52:57.779602 67740 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem
I0522 17:52:57.779638 67740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem (1679 bytes)
I0522 17:52:57.779711 67740 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem org=jenkins.ha-828033 san=[127.0.0.1 192.168.49.2 ha-828033 localhost minikube]
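configureAuth's server cert carries both DNS and IP SANs (the san=[...] list above). A self-contained crypto/x509 sketch producing an equivalent certificate, with a throwaway in-memory CA standing in for ca.pem/ca-key.pem; error handling elided for brevity:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA, standing in for ca.pem / ca-key.pem from the log.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the same SAN split the log reports:
	// hostnames go in DNSNames, addresses in IPAddresses.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-828033"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-828033", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
	}
	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}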
I0522 17:52:58.158531 67740 provision.go:177] copyRemoteCerts
I0522 17:52:58.158593 67740 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0522 17:52:58.158628 67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
I0522 17:52:58.174030 67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
I0522 17:52:58.259047 67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0522 17:52:58.259096 67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0522 17:52:58.279107 67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem -> /etc/docker/server.pem
I0522 17:52:58.279164 67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
I0522 17:52:58.298603 67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0522 17:52:58.298655 67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0522 17:52:58.318081 67740 provision.go:87] duration metric: took 555.057584ms to configureAuth
I0522 17:52:58.318107 67740 ubuntu.go:193] setting minikube options for container-runtime
I0522 17:52:58.318262 67740 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0522 17:52:58.318307 67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
I0522 17:52:58.334537 67740 main.go:141] libmachine: Using SSH client type: native
I0522 17:52:58.334725 67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil> [] 0s} 127.0.0.1 32787 <nil> <nil>}
I0522 17:52:58.334739 67740 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0522 17:52:58.443317 67740 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
I0522 17:52:58.443343 67740 ubuntu.go:71] root file system type: overlay
I0522 17:52:58.443474 67740 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0522 17:52:58.443540 67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
I0522 17:52:58.459128 67740 main.go:141] libmachine: Using SSH client type: native
I0522 17:52:58.459328 67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil> [] 0s} 127.0.0.1 32787 <nil> <nil>}
I0522 17:52:58.459387 67740 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0522 17:52:58.581102 67740 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0522 17:52:58.581172 67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
I0522 17:52:58.597436 67740 main.go:141] libmachine: Using SSH client type: native
I0522 17:52:58.597600 67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil> [] 0s} 127.0.0.1 32787 <nil> <nil>}
I0522 17:52:58.597616 67740 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0522 17:52:59.221776 67740 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2024-05-08 13:59:39.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2024-05-22 17:52:58.575464359 +0000
@@ -1,46 +1,49 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
-Wants=network-online.target containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
+Wants=network-online.target
Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutStartSec=0
-RestartSec=2
-Restart=always
+Restart=on-failure
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
+LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
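The bulk of this diff is the override pattern the unit's own comments describe: a bare ExecStart= first clears the inherited command, because a non-oneshot service may declare only one. minikube rewrites the whole unit file as shown; a drop-in override achieves the same effect, sketched here in Go with a hypothetical override path:

package main

import (
	"log"
	"os"
	"path/filepath"
)

func main() {
	// Drop-in override: the blank ExecStart= line must come first,
	// otherwise systemd rejects the unit with "more than one ExecStart".
	dir := "/etc/systemd/system/docker.service.d" // illustrative path, not what minikube writes
	unit := `[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
`
	if err := os.MkdirAll(dir, 0755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile(filepath.Join(dir, "override.conf"), []byte(unit), 0644); err != nil {
		log.Fatal(err)
	}
	// Follow with: systemctl daemon-reload && systemctl restart docker
}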
I0522 17:52:59.221804 67740 machine.go:97] duration metric: took 1.951777752s to provisionDockerMachine
I0522 17:52:59.221825 67740 client.go:171] duration metric: took 7.450490051s to LocalClient.Create
I0522 17:52:59.221846 67740 start.go:167] duration metric: took 7.450590188s to libmachine.API.Create "ha-828033"
I0522 17:52:59.221855 67740 start.go:293] postStartSetup for "ha-828033" (driver="docker")
I0522 17:52:59.221867 67740 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0522 17:52:59.221924 67740 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0522 17:52:59.221966 67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
I0522 17:52:59.237240 67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
I0522 17:52:59.323437 67740 ssh_runner.go:195] Run: cat /etc/os-release
I0522 17:52:59.326293 67740 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0522 17:52:59.326324 67740 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0522 17:52:59.326337 67740 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0522 17:52:59.326349 67740 info.go:137] Remote host: Ubuntu 22.04.4 LTS
I0522 17:52:59.326360 67740 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-9771/.minikube/addons for local assets ...
I0522 17:52:59.326404 67740 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-9771/.minikube/files for local assets ...
I0522 17:52:59.326472 67740 filesync.go:149] local asset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> 166682.pem in /etc/ssl/certs
I0522 17:52:59.326481 67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> /etc/ssl/certs/166682.pem
I0522 17:52:59.326562 67740 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0522 17:52:59.333825 67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem --> /etc/ssl/certs/166682.pem (1708 bytes)
I0522 17:52:59.354042 67740 start.go:296] duration metric: took 132.174455ms for postStartSetup
I0522 17:52:59.354355 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033
I0522 17:52:59.369659 67740 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
I0522 17:52:59.369914 67740 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0522 17:52:59.369957 67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
I0522 17:52:59.385473 67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
I0522 17:52:59.467652 67740 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0522 17:52:59.471509 67740 start.go:128] duration metric: took 7.702195096s to createHost
I0522 17:52:59.471529 67740 start.go:83] releasing machines lock for "ha-828033", held for 7.702295867s
I0522 17:52:59.471577 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033
I0522 17:52:59.487082 67740 ssh_runner.go:195] Run: cat /version.json
I0522 17:52:59.487134 67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
I0522 17:52:59.487143 67740 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0522 17:52:59.487207 67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
I0522 17:52:59.502998 67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
I0522 17:52:59.504153 67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
I0522 17:52:59.582552 67740 ssh_runner.go:195] Run: systemctl --version
I0522 17:52:59.586415 67740 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0522 17:52:59.653911 67740 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0522 17:52:59.675707 67740 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0522 17:52:59.675785 67740 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0522 17:52:59.699419 67740 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
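Disabling by rename rather than delete keeps the step reversible. A Go equivalent of the find/-exec mv above, using the same .mk_disabled suffix:

package main

import (
	"log"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	for _, pattern := range []string{"*bridge*", "*podman*"} {
		matches, err := filepath.Glob(filepath.Join("/etc/cni/net.d", pattern))
		if err != nil {
			log.Fatal(err)
		}
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue // already disabled on a previous run
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				log.Fatal(err)
			}
			log.Printf("disabled %s", m)
		}
	}
}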
I0522 17:52:59.699447 67740 start.go:494] detecting cgroup driver to use...
I0522 17:52:59.699483 67740 detect.go:196] detected "cgroupfs" cgroup driver on host os
I0522 17:52:59.699592 67740 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0522 17:52:59.713359 67740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0522 17:52:59.721747 67740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0522 17:52:59.729895 67740 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0522 17:52:59.729949 67740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0522 17:52:59.738288 67740 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0522 17:52:59.746561 67740 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0522 17:52:59.754810 67740 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0522 17:52:59.762993 67740 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0522 17:52:59.770726 67740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0522 17:52:59.778920 67740 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0522 17:52:59.787052 67740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0522 17:52:59.795263 67740 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0522 17:52:59.802296 67740 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0522 17:52:59.809582 67740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0522 17:52:59.883276 67740 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0522 17:52:59.963129 67740 start.go:494] detecting cgroup driver to use...
I0522 17:52:59.963176 67740 detect.go:196] detected "cgroupfs" cgroup driver on host os
I0522 17:52:59.963243 67740 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0522 17:52:59.974498 67740 cruntime.go:279] skipping containerd shutdown because we are bound to it
I0522 17:52:59.974562 67740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0522 17:52:59.984764 67740 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0522 17:53:00.000654 67740 ssh_runner.go:195] Run: which cri-dockerd
I0522 17:53:00.003744 67740 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0522 17:53:00.011737 67740 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0522 17:53:00.029748 67740 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0522 17:53:00.143798 67740 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0522 17:53:00.227819 67740 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0522 17:53:00.227952 67740 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0522 17:53:00.243383 67740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0522 17:53:00.315723 67740 ssh_runner.go:195] Run: sudo systemctl restart docker
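The 130-byte /etc/docker/daemon.json is copied from memory and never echoed into the log, so its exact contents are not shown here. A plausible reconstruction, assuming only the documented exec-opts key for pinning dockerd's cgroup driver:

package main

import (
	"encoding/json"
	"log"
	"os"
)

func main() {
	// Assumed payload: pin the cgroup driver detected on the host.
	// The real file written by minikube may carry additional keys.
	cfg := map[string]any{
		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
	}
	data, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/etc/docker/daemon.json", data, 0644); err != nil {
		log.Fatal(err)
	}
	// Then: systemctl daemon-reload && systemctl restart docker, as the log does.
}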
I0522 17:53:00.537231 67740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
I0522 17:53:00.547492 67740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0522 17:53:00.557301 67740 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0522 17:53:00.636990 67740 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0522 17:53:00.707384 67740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0522 17:53:00.778889 67740 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0522 17:53:00.790448 67740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0522 17:53:00.799716 67740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0522 17:53:00.871781 67740 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
I0522 17:53:00.927578 67740 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0522 17:53:00.927643 67740 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0522 17:53:00.930933 67740 start.go:562] Will wait 60s for crictl version
I0522 17:53:00.930992 67740 ssh_runner.go:195] Run: which crictl
I0522 17:53:00.934009 67740 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0522 17:53:00.964626 67740 start.go:578] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 26.1.2
RuntimeApiVersion: v1
I0522 17:53:00.964671 67740 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0522 17:53:00.985746 67740 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0522 17:53:01.008319 67740 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.2 ...
I0522 17:53:01.008394 67740 cli_runner.go:164] Run: docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0522 17:53:01.024322 67740 ssh_runner.go:195] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I0522 17:53:01.027742 67740 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0522 17:53:01.037471 67740 kubeadm.go:877] updating cluster {Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0522 17:53:01.037581 67740 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
I0522 17:53:01.037636 67740 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0522 17:53:01.054459 67740 docker.go:685] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.30.1
registry.k8s.io/kube-controller-manager:v1.30.1
registry.k8s.io/kube-scheduler:v1.30.1
registry.k8s.io/kube-proxy:v1.30.1
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/coredns/coredns:v1.11.1
registry.k8s.io/pause:3.9
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0522 17:53:01.054484 67740 docker.go:615] Images already preloaded, skipping extraction
I0522 17:53:01.054533 67740 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0522 17:53:01.071182 67740 docker.go:685] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.30.1
registry.k8s.io/kube-scheduler:v1.30.1
registry.k8s.io/kube-controller-manager:v1.30.1
registry.k8s.io/kube-proxy:v1.30.1
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/coredns/coredns:v1.11.1
registry.k8s.io/pause:3.9
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0522 17:53:01.071199 67740 cache_images.go:84] Images are preloaded, skipping loading
I0522 17:53:01.071214 67740 kubeadm.go:928] updating node { 192.168.49.2 8443 v1.30.1 docker true true} ...
I0522 17:53:01.071337 67740 kubeadm.go:940] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-828033 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0522 17:53:01.071392 67740 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0522 17:53:01.113042 67740 cni.go:84] Creating CNI manager for ""
I0522 17:53:01.113070 67740 cni.go:136] multinode detected (1 nodes found), recommending kindnet
I0522 17:53:01.113090 67740 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0522 17:53:01.113121 67740 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-828033 NodeName:ha-828033 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0522 17:53:01.113296 67740 kubeadm.go:187] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  name: "ha-828033"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.30.1
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
  # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
  # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
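The rendered kubeadm.yaml is four YAML documents in one file. One way to sanity-check such a file from Go is a streaming decoder; gopkg.in/yaml.v3 consumes the --- separators natively:

package main

import (
	"errors"
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml") // e.g. /var/tmp/minikube/kubeadm.yaml
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]any
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break // all documents consumed
			}
			log.Fatal(err) // malformed document
		}
		fmt.Printf("%v/%v\n", doc["apiVersion"], doc["kind"])
	}
}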
I0522 17:53:01.113320 67740 kube-vip.go:115] generating kube-vip config ...
I0522 17:53:01.113376 67740 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
I0522 17:53:01.123923 67740 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
stdout:
stderr:
I0522 17:53:01.124031 67740 kube-vip.go:137] kube-vip config:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - args:
    - manager
    env:
    - name: vip_arp
      value: "true"
    - name: port
      value: "8443"
    - name: vip_nodename
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
    - name: vip_interface
      value: eth0
    - name: vip_cidr
      value: "32"
    - name: dns_mode
      value: first
    - name: cp_enable
      value: "true"
    - name: cp_namespace
      value: kube-system
    - name: vip_leaderelection
      value: "true"
    - name: vip_leasename
      value: plndr-cp-lock
    - name: vip_leaseduration
      value: "5"
    - name: vip_renewdeadline
      value: "3"
    - name: vip_retryperiod
      value: "1"
    - name: address
      value: 192.168.49.254
    - name: prometheus_server
      value: :2112
    image: ghcr.io/kube-vip/kube-vip:v0.8.0
    imagePullPolicy: IfNotPresent
    name: kube-vip
    resources: {}
    securityContext:
      capabilities:
        add:
        - NET_ADMIN
        - NET_RAW
    volumeMounts:
    - mountPath: /etc/kubernetes/admin.conf
      name: kubeconfig
  hostAliases:
  - hostnames:
    - kubernetes
    ip: 127.0.0.1
  hostNetwork: true
  volumes:
  - hostPath:
      path: "/etc/kubernetes/super-admin.conf"
    name: kubeconfig
status: {}
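The ARP-only fallback above hinges on a single probe, lsmod | grep ip_vs. Since lsmod merely formats /proc/modules, the same check needs no shell:

package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

// ipvsAvailable reports whether any ip_vs* module is loaded, mirroring
// the `sudo sh -c "lsmod | grep ip_vs"` probe from the log.
func ipvsAvailable() (bool, error) {
	f, err := os.Open("/proc/modules")
	if err != nil {
		return false, err
	}
	defer f.Close()
	s := bufio.NewScanner(f)
	for s.Scan() {
		if strings.HasPrefix(s.Text(), "ip_vs") {
			return true, nil
		}
	}
	return false, s.Err()
}

func main() {
	ok, err := ipvsAvailable()
	if err != nil {
		log.Fatal(err)
	}
	if ok {
		fmt.Println("kube-vip can use IPVS control-plane load-balancing")
	} else {
		fmt.Println("falling back to ARP-only VIP, as in this run")
	}
}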
I0522 17:53:01.124082 67740 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
I0522 17:53:01.131476 67740 binaries.go:44] Found k8s binaries, skipping transfer
I0522 17:53:01.131533 67740 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
I0522 17:53:01.138724 67740 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
I0522 17:53:01.153627 67740 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0522 17:53:01.168501 67740 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
I0522 17:53:01.183138 67740 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
I0522 17:53:01.197801 67740 ssh_runner.go:195] Run: grep 192.168.49.254 control-plane.minikube.internal$ /etc/hosts
I0522 17:53:01.200669 67740 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0522 17:53:01.209778 67740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0522 17:53:01.280341 67740 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0522 17:53:01.292055 67740 certs.go:68] Setting up /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033 for IP: 192.168.49.2
I0522 17:53:01.292076 67740 certs.go:194] generating shared ca certs ...
I0522 17:53:01.292094 67740 certs.go:226] acquiring lock for ca certs: {Name:mkb1ad99e7529ca8084f0e374cc6ddf767aece14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0522 17:53:01.292206 67740 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key
I0522 17:53:01.292254 67740 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key
I0522 17:53:01.292264 67740 certs.go:256] generating profile certs ...
I0522 17:53:01.292307 67740 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.key
I0522 17:53:01.292319 67740 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.crt with IP's: []
I0522 17:53:01.356953 67740 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.crt ...
I0522 17:53:01.356984 67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.crt: {Name:mk107af694da048ec96fb863990f78dd2f1cfdbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0522 17:53:01.357149 67740 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.key ...
I0522 17:53:01.357160 67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.key: {Name:mkf1e13d4f9700868add4d6cce143b650167d122 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0522 17:53:01.357241 67740 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.fe94fa36
I0522 17:53:01.357257 67740 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.fe94fa36 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
I0522 17:53:01.556313 67740 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.fe94fa36 ...
I0522 17:53:01.556340 67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.fe94fa36: {Name:mkab4c373a3fffab576a8ea1d67e55afa225eeb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0522 17:53:01.556500 67740 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.fe94fa36 ...
I0522 17:53:01.556513 67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.fe94fa36: {Name:mk7e54effde1d4509e26cfa435b194571ee47bb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0522 17:53:01.556580 67740 certs.go:381] copying /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.fe94fa36 -> /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt
I0522 17:53:01.556650 67740 certs.go:385] copying /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.fe94fa36 -> /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key
I0522 17:53:01.556697 67740 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key
I0522 17:53:01.556711 67740 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt with IP's: []
I0522 17:53:01.630998 67740 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt ...
I0522 17:53:01.631021 67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt: {Name:mkeef06bb61e0ccc36361cc465c59f21e7bdea1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0522 17:53:01.631157 67740 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key ...
I0522 17:53:01.631168 67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key: {Name:mkb9ab74b377711217a8c6b152f36c9fda7264a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0522 17:53:01.631230 67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0522 17:53:01.631246 67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0522 17:53:01.631260 67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0522 17:53:01.631309 67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0522 17:53:01.631328 67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0522 17:53:01.631343 67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0522 17:53:01.631356 67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0522 17:53:01.631365 67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0522 17:53:01.631417 67740 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem (1338 bytes)
W0522 17:53:01.631447 67740 certs.go:480] ignoring /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668_empty.pem, impossibly tiny 0 bytes
I0522 17:53:01.631457 67740 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem (1679 bytes)
I0522 17:53:01.631479 67740 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem (1078 bytes)
I0522 17:53:01.631502 67740 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem (1123 bytes)
I0522 17:53:01.631523 67740 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem (1679 bytes)
I0522 17:53:01.631558 67740 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem (1708 bytes)
I0522 17:53:01.631582 67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0522 17:53:01.631597 67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem -> /usr/share/ca-certificates/16668.pem
I0522 17:53:01.631608 67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> /usr/share/ca-certificates/166682.pem
I0522 17:53:01.632128 67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0522 17:53:01.652751 67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0522 17:53:01.672560 67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0522 17:53:01.691795 67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0522 17:53:01.711301 67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
I0522 17:53:01.731063 67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0522 17:53:01.751064 67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0522 17:53:01.770695 67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0522 17:53:01.790410 67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0522 17:53:01.814053 67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem --> /usr/share/ca-certificates/16668.pem (1338 bytes)
I0522 17:53:01.833703 67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem --> /usr/share/ca-certificates/166682.pem (1708 bytes)
I0522 17:53:01.853223 67740 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0522 17:53:01.868213 67740 ssh_runner.go:195] Run: openssl version
I0522 17:53:01.872673 67740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166682.pem && ln -fs /usr/share/ca-certificates/166682.pem /etc/ssl/certs/166682.pem"
I0522 17:53:01.880830 67740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166682.pem
I0522 17:53:01.883744 67740 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 22 17:49 /usr/share/ca-certificates/166682.pem
I0522 17:53:01.883792 67740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166682.pem
I0522 17:53:01.889587 67740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166682.pem /etc/ssl/certs/3ec20f2e.0"
I0522 17:53:01.897227 67740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0522 17:53:01.904819 67740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0522 17:53:01.907709 67740 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 22 17:45 /usr/share/ca-certificates/minikubeCA.pem
I0522 17:53:01.907753 67740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0522 17:53:01.913481 67740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0522 17:53:01.921278 67740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16668.pem && ln -fs /usr/share/ca-certificates/16668.pem /etc/ssl/certs/16668.pem"
I0522 17:53:01.929363 67740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16668.pem
I0522 17:53:01.932295 67740 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 22 17:49 /usr/share/ca-certificates/16668.pem
I0522 17:53:01.932352 67740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16668.pem
I0522 17:53:01.938436 67740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16668.pem /etc/ssl/certs/51391683.0"
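Each openssl x509 -hash / ln -fs pair above follows the OpenSSL CA-directory convention: a trusted cert must be reachable as <subject-hash>.0. Go's standard library has no subject-hash helper, so shelling out, as this run effectively does, is the pragmatic route; a sketch:

package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash symlinks certPath into dir as <hash>.0, the layout
// OpenSSL uses to look up trusted CAs (what `c_rehash` maintains).
func linkBySubjectHash(certPath, dir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join(dir, strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // replace a stale link if present
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		log.Fatal(err)
	}
}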
I0522 17:53:01.946360 67740 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0522 17:53:01.949115 67740 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0522 17:53:01.949164 67740 kubeadm.go:391] StartCluster: {Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0522 17:53:01.949252 67740 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0522 17:53:01.965541 67740 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0522 17:53:01.973093 67740 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0522 17:53:01.980229 67740 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver
I0522 17:53:01.980270 67740 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0522 17:53:01.987751 67740 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0522 17:53:01.987768 67740 kubeadm.go:156] found existing configuration files:
I0522 17:53:01.987805 67740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0522 17:53:01.994901 67740 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0522 17:53:01.994936 67740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0522 17:53:02.001636 67740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0522 17:53:02.008534 67740 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0522 17:53:02.008575 67740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0522 17:53:02.015362 67740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0522 17:53:02.022382 67740 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0522 17:53:02.022417 67740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0522 17:53:02.029147 67740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0522 17:53:02.036313 67740 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0522 17:53:02.036352 67740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0522 17:53:02.043146 67740 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0522 17:53:02.083648 67740 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
I0522 17:53:02.083709 67740 kubeadm.go:309] [preflight] Running pre-flight checks
I0522 17:53:02.119636 67740 kubeadm.go:309] [preflight] The system verification failed. Printing the output from the verification:
I0522 17:53:02.119808 67740 kubeadm.go:309] KERNEL_VERSION: 5.15.0-1060-gcp
I0522 17:53:02.119876 67740 kubeadm.go:309] OS: Linux
I0522 17:53:02.119973 67740 kubeadm.go:309] CGROUPS_CPU: enabled
I0522 17:53:02.120054 67740 kubeadm.go:309] CGROUPS_CPUACCT: enabled
I0522 17:53:02.120145 67740 kubeadm.go:309] CGROUPS_CPUSET: enabled
I0522 17:53:02.120222 67740 kubeadm.go:309] CGROUPS_DEVICES: enabled
I0522 17:53:02.120314 67740 kubeadm.go:309] CGROUPS_FREEZER: enabled
I0522 17:53:02.120387 67740 kubeadm.go:309] CGROUPS_MEMORY: enabled
I0522 17:53:02.120444 67740 kubeadm.go:309] CGROUPS_PIDS: enabled
I0522 17:53:02.120498 67740 kubeadm.go:309] CGROUPS_HUGETLB: enabled
I0522 17:53:02.120559 67740 kubeadm.go:309] CGROUPS_BLKIO: enabled
I0522 17:53:02.176871 67740 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
I0522 17:53:02.177025 67740 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0522 17:53:02.177141 67740 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0522 17:53:02.372325 67740 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0522 17:53:02.375701 67740 out.go:204] - Generating certificates and keys ...
I0522 17:53:02.375812 67740 kubeadm.go:309] [certs] Using existing ca certificate authority
I0522 17:53:02.375935 67740 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
I0522 17:53:02.532924 67740 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
I0522 17:53:02.638523 67740 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
I0522 17:53:02.792671 67740 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
I0522 17:53:02.965135 67740 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
I0522 17:53:03.124232 67740 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
I0522 17:53:03.124354 67740 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-828033 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0522 17:53:03.226994 67740 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
I0522 17:53:03.227194 67740 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-828033 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0522 17:53:03.284062 67740 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
I0522 17:53:03.587406 67740 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
I0522 17:53:03.694896 67740 kubeadm.go:309] [certs] Generating "sa" key and public key
I0522 17:53:03.695247 67740 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0522 17:53:03.870895 67740 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
I0522 17:53:04.007853 67740 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0522 17:53:04.078725 67740 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0522 17:53:04.260744 67740 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0522 17:53:04.365893 67740 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0522 17:53:04.366333 67740 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0522 17:53:04.368648 67740 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0522 17:53:04.370859 67740 out.go:204] - Booting up control plane ...
I0522 17:53:04.370979 67740 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0522 17:53:04.371088 67740 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0522 17:53:04.371171 67740 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0522 17:53:04.383092 67740 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0522 17:53:04.384599 67740 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0522 17:53:04.384838 67740 kubeadm.go:309] [kubelet-start] Starting the kubelet
I0522 17:53:04.466492 67740 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0522 17:53:04.466604 67740 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
I0522 17:53:05.468427 67740 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002114893s
I0522 17:53:05.468551 67740 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I0522 17:53:11.141380 67740 kubeadm.go:309] [api-check] The API server is healthy after 5.672901996s
I0522 17:53:11.152116 67740 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0522 17:53:11.161056 67740 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0522 17:53:11.678578 67740 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
I0522 17:53:11.678814 67740 kubeadm.go:309] [mark-control-plane] Marking the node ha-828033 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0522 17:53:11.685295 67740 kubeadm.go:309] [bootstrap-token] Using token: 5urei6.f9k1l0b1jzskzaeu
I0522 17:53:11.686669 67740 out.go:204] - Configuring RBAC rules ...
I0522 17:53:11.686814 67740 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0522 17:53:11.689832 67740 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0522 17:53:11.694718 67740 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0522 17:53:11.699847 67740 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0522 17:53:11.702108 67740 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0522 17:53:11.704239 67740 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0522 17:53:11.712550 67740 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0522 17:53:11.974533 67740 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
I0522 17:53:12.547008 67740 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
I0522 17:53:12.548083 67740 kubeadm.go:309]
I0522 17:53:12.548149 67740 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
I0522 17:53:12.548156 67740 kubeadm.go:309]
I0522 17:53:12.548253 67740 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
I0522 17:53:12.548267 67740 kubeadm.go:309]
I0522 17:53:12.548307 67740 kubeadm.go:309] mkdir -p $HOME/.kube
I0522 17:53:12.548384 67740 kubeadm.go:309] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0522 17:53:12.548466 67740 kubeadm.go:309] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0522 17:53:12.548477 67740 kubeadm.go:309]
I0522 17:53:12.548545 67740 kubeadm.go:309] Alternatively, if you are the root user, you can run:
I0522 17:53:12.548559 67740 kubeadm.go:309]
I0522 17:53:12.548601 67740 kubeadm.go:309] export KUBECONFIG=/etc/kubernetes/admin.conf
I0522 17:53:12.548609 67740 kubeadm.go:309]
I0522 17:53:12.548648 67740 kubeadm.go:309] You should now deploy a pod network to the cluster.
I0522 17:53:12.548713 67740 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0522 17:53:12.548778 67740 kubeadm.go:309] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0522 17:53:12.548785 67740 kubeadm.go:309]
I0522 17:53:12.548889 67740 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
I0522 17:53:12.548992 67740 kubeadm.go:309] and service account keys on each node and then running the following as root:
I0522 17:53:12.549009 67740 kubeadm.go:309]
I0522 17:53:12.549123 67740 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 5urei6.f9k1l0b1jzskzaeu \
I0522 17:53:12.549259 67740 kubeadm.go:309] --discovery-token-ca-cert-hash sha256:570edbfe7b2478083ade58e1456273e7a23d9332ea9b48503bd1fbf2d9614c7e \
I0522 17:53:12.549291 67740 kubeadm.go:309] --control-plane
I0522 17:53:12.549300 67740 kubeadm.go:309]
I0522 17:53:12.549413 67740 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
I0522 17:53:12.549427 67740 kubeadm.go:309]
I0522 17:53:12.549530 67740 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 5urei6.f9k1l0b1jzskzaeu \
I0522 17:53:12.549654 67740 kubeadm.go:309] --discovery-token-ca-cert-hash sha256:570edbfe7b2478083ade58e1456273e7a23d9332ea9b48503bd1fbf2d9614c7e
I0522 17:53:12.551710 67740 kubeadm.go:309] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1060-gcp\n", err: exit status 1
I0522 17:53:12.551839 67740 kubeadm.go:309] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0522 17:53:12.551867 67740 cni.go:84] Creating CNI manager for ""
I0522 17:53:12.551876 67740 cni.go:136] multinode detected (1 nodes found), recommending kindnet
I0522 17:53:12.553609 67740 out.go:177] * Configuring CNI (Container Networking Interface) ...
I0522 17:53:12.554924 67740 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I0522 17:53:12.558498 67740 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
I0522 17:53:12.558516 67740 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
I0522 17:53:12.574461 67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I0522 17:53:12.755502 67740 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0522 17:53:12.755579 67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0522 17:53:12.755600 67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-828033 minikube.k8s.io/updated_at=2024_05_22T17_53_12_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=461168c3991b3796899fb93cd381299efb7493c9 minikube.k8s.io/name=ha-828033 minikube.k8s.io/primary=true
I0522 17:53:12.850109 67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0522 17:53:12.855591 67740 ops.go:34] apiserver oom_adj: -16
I0522 17:53:13.350585 67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0522 17:53:13.850559 67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0522 17:53:14.350332 67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0522 17:53:14.850482 67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0522 17:53:15.350200 67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0522 17:53:15.850568 67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0522 17:53:16.350359 67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0522 17:53:16.850559 67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0522 17:53:17.350665 67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0522 17:53:17.850775 67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0522 17:53:18.351191 67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0522 17:53:18.850358 67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0522 17:53:19.351122 67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0522 17:53:19.850171 67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0522 17:53:20.350366 67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0522 17:53:20.851051 67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0522 17:53:21.350960 67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0522 17:53:21.851014 67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0522 17:53:22.350781 67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0522 17:53:22.850795 67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0522 17:53:23.350314 67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0522 17:53:23.851155 67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0522 17:53:24.351209 67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0522 17:53:24.850179 67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0522 17:53:24.912848 67740 kubeadm.go:1107] duration metric: took 12.157331343s to wait for elevateKubeSystemPrivileges
W0522 17:53:24.912892 67740 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
I0522 17:53:24.912903 67740 kubeadm.go:393] duration metric: took 22.9637422s to StartCluster
I0522 17:53:24.912925 67740 settings.go:142] acquiring lock: {Name:mk1bbb63a81703f4c38b97b29a878017a54a2114 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0522 17:53:24.912998 67740 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/18943-9771/kubeconfig
I0522 17:53:24.913898 67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/kubeconfig: {Name:mkb679b50174ade9b53ad7d806acd171ac61db6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0522 17:53:24.914152 67740 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0522 17:53:24.914177 67740 start.go:240] waiting for startup goroutines ...
I0522 17:53:24.914209 67740 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0522 17:53:24.914186 67740 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
I0522 17:53:24.914247 67740 addons.go:69] Setting storage-provisioner=true in profile "ha-828033"
I0522 17:53:24.914265 67740 addons.go:69] Setting default-storageclass=true in profile "ha-828033"
I0522 17:53:24.914280 67740 addons.go:234] Setting addon storage-provisioner=true in "ha-828033"
I0522 17:53:24.914303 67740 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-828033"
I0522 17:53:24.914307 67740 host.go:66] Checking if "ha-828033" exists ...
I0522 17:53:24.914407 67740 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0522 17:53:24.914687 67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
I0522 17:53:24.914856 67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
I0522 17:53:24.936661 67740 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0522 17:53:24.935358 67740 loader.go:395] Config loaded from file: /home/jenkins/minikube-integration/18943-9771/kubeconfig
I0522 17:53:24.938027 67740 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0522 17:53:24.938051 67740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0522 17:53:24.938104 67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
I0522 17:53:24.938117 67740 kapi.go:59] client config for ha-828033: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.crt", KeyFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.key", CAFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0522 17:53:24.938535 67740 cert_rotation.go:137] Starting client certificate rotation controller
I0522 17:53:24.938693 67740 addons.go:234] Setting addon default-storageclass=true in "ha-828033"
I0522 17:53:24.938728 67740 host.go:66] Checking if "ha-828033" exists ...
I0522 17:53:24.939066 67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
I0522 17:53:24.955478 67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
I0522 17:53:24.964156 67740 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
I0522 17:53:24.964174 67740 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0522 17:53:24.964216 67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
I0522 17:53:24.983375 67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
I0522 17:53:24.987665 67740 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.49.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0522 17:53:25.061038 67740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0522 17:53:25.083441 67740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0522 17:53:25.371936 67740 start.go:946] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
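
For readers unpacking the sed pipeline a few lines up: it rewrites the CoreDNS Corefile before piping it back through kubectl replace, inserting a "log" directive above the "errors" line and a "hosts" block above the "forward . /etc/resolv.conf" line. Under those two sed expressions, the rewritten Corefile fragment should look roughly like the following (a reconstruction from the sed commands, not captured output; elided plugins marked "..."):

    .:53 {
        log
        errors
        ...
        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        ...
    }

This is what makes host.minikube.internal resolvable from inside the cluster, as the "host record injected" line above confirms.
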
I0522 17:53:25.697836 67740 round_trippers.go:463] GET https://192.168.49.254:8443/apis/storage.k8s.io/v1/storageclasses
I0522 17:53:25.697859 67740 round_trippers.go:469] Request Headers:
I0522 17:53:25.697869 67740 round_trippers.go:473] Accept: application/json, */*
I0522 17:53:25.697875 67740 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0522 17:53:25.750106 67740 round_trippers.go:574] Response Status: 200 OK in 52 milliseconds
I0522 17:53:25.750738 67740 round_trippers.go:463] PUT https://192.168.49.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
I0522 17:53:25.750766 67740 round_trippers.go:469] Request Headers:
I0522 17:53:25.750775 67740 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0522 17:53:25.750779 67740 round_trippers.go:473] Accept: application/json, */*
I0522 17:53:25.750781 67740 round_trippers.go:473] Content-Type: application/json
I0522 17:53:25.753047 67740 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0522 17:53:25.754766 67740 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
I0522 17:53:25.755957 67740 addons.go:505] duration metric: took 841.76495ms for enable addons: enabled=[storage-provisioner default-storageclass]
I0522 17:53:25.755999 67740 start.go:245] waiting for cluster config update ...
I0522 17:53:25.756022 67740 start.go:254] writing updated cluster config ...
I0522 17:53:25.757404 67740 out.go:177]
I0522 17:53:25.758849 67740 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0522 17:53:25.758935 67740 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
I0522 17:53:25.760603 67740 out.go:177] * Starting "ha-828033-m02" control-plane node in "ha-828033" cluster
I0522 17:53:25.761714 67740 cache.go:121] Beginning downloading kic base image for docker with docker
I0522 17:53:25.762872 67740 out.go:177] * Pulling base image v0.0.44-1715707529-18887 ...
I0522 17:53:25.764352 67740 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
I0522 17:53:25.764396 67740 cache.go:56] Caching tarball of preloaded images
I0522 17:53:25.764446 67740 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
I0522 17:53:25.764489 67740 preload.go:173] Found /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0522 17:53:25.764505 67740 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
I0522 17:53:25.764593 67740 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
I0522 17:53:25.782684 67740 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon, skipping pull
I0522 17:53:25.782710 67740 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in daemon, skipping load
I0522 17:53:25.782728 67740 cache.go:194] Successfully downloaded all kic artifacts
I0522 17:53:25.782765 67740 start.go:360] acquireMachinesLock for ha-828033-m02: {Name:mk5f7d59268c8438badf0506a539b57f0dca67dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0522 17:53:25.782880 67740 start.go:364] duration metric: took 83.137µs to acquireMachinesLock for "ha-828033-m02"
I0522 17:53:25.782911 67740 start.go:93] Provisioning new machine with config: &{Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0522 17:53:25.783001 67740 start.go:125] createHost starting for "m02" (driver="docker")
I0522 17:53:25.784711 67740 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
I0522 17:53:25.784832 67740 start.go:159] libmachine.API.Create for "ha-828033" (driver="docker")
I0522 17:53:25.784852 67740 client.go:168] LocalClient.Create starting
I0522 17:53:25.784917 67740 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem
I0522 17:53:25.784953 67740 main.go:141] libmachine: Decoding PEM data...
I0522 17:53:25.784985 67740 main.go:141] libmachine: Parsing certificate...
I0522 17:53:25.785059 67740 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem
I0522 17:53:25.785087 67740 main.go:141] libmachine: Decoding PEM data...
I0522 17:53:25.785100 67740 main.go:141] libmachine: Parsing certificate...
I0522 17:53:25.785951 67740 cli_runner.go:164] Run: docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0522 17:53:25.804785 67740 network_create.go:77] Found existing network {name:ha-828033 subnet:0xc00191cf30 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
I0522 17:53:25.804835 67740 kic.go:121] calculated static IP "192.168.49.3" for the "ha-828033-m02" container
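
A quick illustration of where that address comes from: the kic driver appears to derive node IPs by offsetting the node's ordinal from the network gateway (192.168.49.1), so the second machine in the cluster lands on .3. A minimal sketch of that arithmetic, where nodeIP is a hypothetical helper and not minikube's actual function:

    package main

    import (
        "fmt"
        "net"
    )

    // nodeIP (hypothetical): offset the last octet of the network gateway
    // by the node's ordinal (first node -> .2, the m02 node -> .3).
    func nodeIP(gateway net.IP, ordinal int) net.IP {
        ip := make(net.IP, 4)
        copy(ip, gateway.To4())
        ip[3] += byte(ordinal)
        return ip
    }

    func main() {
        gw := net.ParseIP("192.168.49.1")
        fmt.Println(nodeIP(gw, 1)) // 192.168.49.2 (ha-828033)
        fmt.Println(nodeIP(gw, 2)) // 192.168.49.3 (ha-828033-m02)
    }
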
I0522 17:53:25.804904 67740 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0522 17:53:25.823769 67740 cli_runner.go:164] Run: docker volume create ha-828033-m02 --label name.minikube.sigs.k8s.io=ha-828033-m02 --label created_by.minikube.sigs.k8s.io=true
I0522 17:53:25.840603 67740 oci.go:103] Successfully created a docker volume ha-828033-m02
I0522 17:53:25.840678 67740 cli_runner.go:164] Run: docker run --rm --name ha-828033-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-828033-m02 --entrypoint /usr/bin/test -v ha-828033-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -d /var/lib
I0522 17:53:26.430644 67740 oci.go:107] Successfully prepared a docker volume ha-828033-m02
I0522 17:53:26.430675 67740 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
I0522 17:53:26.430699 67740 kic.go:194] Starting extracting preloaded images to volume ...
I0522 17:53:26.430758 67740 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-828033-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir
I0522 17:53:30.969362 67740 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-828033-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir: (4.538563584s)
I0522 17:53:30.969399 67740 kic.go:203] duration metric: took 4.538697459s to extract preloaded images to volume ...
W0522 17:53:30.969534 67740 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0522 17:53:30.969649 67740 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0522 17:53:31.025232 67740 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-828033-m02 --name ha-828033-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-828033-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-828033-m02 --network ha-828033 --ip 192.168.49.3 --volume ha-828033-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a
I0522 17:53:31.438620 67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Running}}
I0522 17:53:31.457423 67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
I0522 17:53:31.475562 67740 cli_runner.go:164] Run: docker exec ha-828033-m02 stat /var/lib/dpkg/alternatives/iptables
I0522 17:53:31.519384 67740 oci.go:144] the created container "ha-828033-m02" has a running status.
I0522 17:53:31.519414 67740 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa...
I0522 17:53:31.724062 67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
I0522 17:53:31.724104 67740 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0522 17:53:31.751442 67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
I0522 17:53:31.776640 67740 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0522 17:53:31.776667 67740 kic_runner.go:114] Args: [docker exec --privileged ha-828033-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
I0522 17:53:31.862090 67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
I0522 17:53:31.891639 67740 machine.go:94] provisionDockerMachine start ...
I0522 17:53:31.891731 67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
I0522 17:53:31.917156 67740 main.go:141] libmachine: Using SSH client type: native
I0522 17:53:31.917467 67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil> [] 0s} 127.0.0.1 32792 <nil> <nil>}
I0522 17:53:31.917492 67740 main.go:141] libmachine: About to run SSH command:
hostname
I0522 17:53:32.120712 67740 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033-m02
I0522 17:53:32.120737 67740 ubuntu.go:169] provisioning hostname "ha-828033-m02"
I0522 17:53:32.120785 67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
I0522 17:53:32.137375 67740 main.go:141] libmachine: Using SSH client type: native
I0522 17:53:32.137553 67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil> [] 0s} 127.0.0.1 32792 <nil> <nil>}
I0522 17:53:32.137567 67740 main.go:141] libmachine: About to run SSH command:
sudo hostname ha-828033-m02 && echo "ha-828033-m02" | sudo tee /etc/hostname
I0522 17:53:32.276420 67740 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033-m02
I0522 17:53:32.276522 67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
I0522 17:53:32.298553 67740 main.go:141] libmachine: Using SSH client type: native
I0522 17:53:32.298714 67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil> [] 0s} 127.0.0.1 32792 <nil> <nil>}
I0522 17:53:32.298729 67740 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sha-828033-m02' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-828033-m02/g' /etc/hosts;
else
echo '127.0.1.1 ha-828033-m02' | sudo tee -a /etc/hosts;
fi
fi
I0522 17:53:32.411237 67740 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0522 17:53:32.411298 67740 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
I0522 17:53:32.411322 67740 ubuntu.go:177] setting up certificates
I0522 17:53:32.411342 67740 provision.go:84] configureAuth start
I0522 17:53:32.411438 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:53:32.427815 67740 provision.go:87] duration metric: took 16.459419ms to configureAuth
W0522 17:53:32.427838 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:32.427861 67740 retry.go:31] will retry after 99.984µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:32.428984 67740 provision.go:84] configureAuth start
I0522 17:53:32.429054 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:53:32.445063 67740 provision.go:87] duration metric: took 16.057791ms to configureAuth
W0522 17:53:32.445082 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:32.445102 67740 retry.go:31] will retry after 208.046µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:32.446175 67740 provision.go:84] configureAuth start
I0522 17:53:32.446261 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:53:32.463887 67740 provision.go:87] duration metric: took 17.691272ms to configureAuth
W0522 17:53:32.463912 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:32.463934 67740 retry.go:31] will retry after 199.015µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:32.465043 67740 provision.go:84] configureAuth start
I0522 17:53:32.465105 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:53:32.486733 67740 provision.go:87] duration metric: took 21.670064ms to configureAuth
W0522 17:53:32.486759 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:32.486781 67740 retry.go:31] will retry after 297.941µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:32.487897 67740 provision.go:84] configureAuth start
I0522 17:53:32.487975 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:53:32.507152 67740 provision.go:87] duration metric: took 19.234225ms to configureAuth
W0522 17:53:32.507176 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:32.507196 67740 retry.go:31] will retry after 745.775µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:32.508343 67740 provision.go:84] configureAuth start
I0522 17:53:32.508443 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:53:32.525068 67740 provision.go:87] duration metric: took 16.703078ms to configureAuth
W0522 17:53:32.525086 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:32.525106 67740 retry.go:31] will retry after 599.638µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:32.526213 67740 provision.go:84] configureAuth start
I0522 17:53:32.526268 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:53:32.542838 67740 provision.go:87] duration metric: took 16.605804ms to configureAuth
W0522 17:53:32.542858 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:32.542874 67740 retry.go:31] will retry after 1.661041ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:32.545050 67740 provision.go:84] configureAuth start
I0522 17:53:32.545124 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:53:32.567428 67740 provision.go:87] duration metric: took 22.355702ms to configureAuth
W0522 17:53:32.567454 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:32.567475 67740 retry.go:31] will retry after 2.108109ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:32.570624 67740 provision.go:84] configureAuth start
I0522 17:53:32.570712 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:53:32.592038 67740 provision.go:87] duration metric: took 21.385415ms to configureAuth
W0522 17:53:32.592083 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:32.592109 67740 retry.go:31] will retry after 2.355136ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:32.595345 67740 provision.go:84] configureAuth start
I0522 17:53:32.595474 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:53:32.616428 67740 provision.go:87] duration metric: took 21.063557ms to configureAuth
W0522 17:53:32.616444 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:32.616459 67740 retry.go:31] will retry after 2.728057ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:32.619645 67740 provision.go:84] configureAuth start
I0522 17:53:32.619716 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:53:32.636068 67740 provision.go:87] duration metric: took 16.400565ms to configureAuth
W0522 17:53:32.636089 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:32.636107 67740 retry.go:31] will retry after 4.374124ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:32.641290 67740 provision.go:84] configureAuth start
I0522 17:53:32.641357 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:53:32.661778 67740 provision.go:87] duration metric: took 20.468934ms to configureAuth
W0522 17:53:32.661802 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:32.661830 67740 retry.go:31] will retry after 8.99759ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:32.671042 67740 provision.go:84] configureAuth start
I0522 17:53:32.671123 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:53:32.691366 67740 provision.go:87] duration metric: took 20.298927ms to configureAuth
W0522 17:53:32.691389 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:32.691409 67740 retry.go:31] will retry after 8.160386ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:32.700603 67740 provision.go:84] configureAuth start
I0522 17:53:32.700678 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:53:32.720331 67740 provision.go:87] duration metric: took 19.708758ms to configureAuth
W0522 17:53:32.720351 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:32.720370 67740 retry.go:31] will retry after 17.367544ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:32.738551 67740 provision.go:84] configureAuth start
I0522 17:53:32.738628 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:53:32.759082 67740 provision.go:87] duration metric: took 20.511019ms to configureAuth
W0522 17:53:32.759106 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:32.759126 67740 retry.go:31] will retry after 15.566976ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:32.775330 67740 provision.go:84] configureAuth start
I0522 17:53:32.775414 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:53:32.794844 67740 provision.go:87] duration metric: took 19.490522ms to configureAuth
W0522 17:53:32.794868 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:32.794890 67740 retry.go:31] will retry after 23.240317ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:32.819079 67740 provision.go:84] configureAuth start
I0522 17:53:32.819159 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:53:32.834908 67740 provision.go:87] duration metric: took 15.798234ms to configureAuth
W0522 17:53:32.834926 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:32.834943 67740 retry.go:31] will retry after 47.572088ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:32.883190 67740 provision.go:84] configureAuth start
I0522 17:53:32.883335 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:53:32.904257 67740 provision.go:87] duration metric: took 21.017488ms to configureAuth
W0522 17:53:32.904296 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:32.904322 67740 retry.go:31] will retry after 146.348345ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:33.051578 67740 provision.go:84] configureAuth start
I0522 17:53:33.051698 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:53:33.071933 67740 provision.go:87] duration metric: took 20.324402ms to configureAuth
W0522 17:53:33.071959 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:33.071983 67740 retry.go:31] will retry after 83.786289ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:33.156290 67740 provision.go:84] configureAuth start
I0522 17:53:33.156396 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:53:33.176346 67740 provision.go:87] duration metric: took 20.024388ms to configureAuth
W0522 17:53:33.176365 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:33.176388 67740 retry.go:31] will retry after 188.977656ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:33.365590 67740 provision.go:84] configureAuth start
I0522 17:53:33.365687 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:53:33.385235 67740 provision.go:87] duration metric: took 19.618338ms to configureAuth
W0522 17:53:33.385262 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:33.385284 67740 retry.go:31] will retry after 372.297422ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:33.758500 67740 provision.go:84] configureAuth start
I0522 17:53:33.758620 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:53:33.778278 67740 provision.go:87] duration metric: took 19.745956ms to configureAuth
W0522 17:53:33.778300 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:33.778321 67740 retry.go:31] will retry after 420.930054ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:34.199905 67740 provision.go:84] configureAuth start
I0522 17:53:34.200025 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:53:34.220225 67740 provision.go:87] duration metric: took 20.271685ms to configureAuth
W0522 17:53:34.220245 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:34.220261 67740 retry.go:31] will retry after 609.139566ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:34.829960 67740 provision.go:84] configureAuth start
I0522 17:53:34.830073 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:53:34.847415 67740 provision.go:87] duration metric: took 17.414439ms to configureAuth
W0522 17:53:34.847434 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:34.847453 67740 retry.go:31] will retry after 1.378249793s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:36.226841 67740 provision.go:84] configureAuth start
I0522 17:53:36.226917 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:53:36.244043 67740 provision.go:87] duration metric: took 17.173768ms to configureAuth
W0522 17:53:36.244065 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:36.244085 67740 retry.go:31] will retry after 915.566153ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:37.160064 67740 provision.go:84] configureAuth start
I0522 17:53:37.160145 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:53:37.178672 67740 provision.go:87] duration metric: took 18.578279ms to configureAuth
W0522 17:53:37.178703 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:37.178727 67740 retry.go:31] will retry after 1.823277401s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:39.003329 67740 provision.go:84] configureAuth start
I0522 17:53:39.003413 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:53:39.022621 67740 provision.go:87] duration metric: took 19.266756ms to configureAuth
W0522 17:53:39.022641 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:39.022658 67740 retry.go:31] will retry after 4.73403722s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:43.757466 67740 provision.go:84] configureAuth start
I0522 17:53:43.757544 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:53:43.774236 67740 provision.go:87] duration metric: took 16.744416ms to configureAuth
W0522 17:53:43.774257 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:43.774290 67740 retry.go:31] will retry after 6.02719967s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:49.804363 67740 provision.go:84] configureAuth start
I0522 17:53:49.804470 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:53:49.821435 67740 provision.go:87] duration metric: took 17.029431ms to configureAuth
W0522 17:53:49.821471 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:53:49.821493 67740 retry.go:31] will retry after 11.229046488s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:54:01.052359 67740 provision.go:84] configureAuth start
I0522 17:54:01.052467 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:54:01.068843 67740 provision.go:87] duration metric: took 16.436722ms to configureAuth
W0522 17:54:01.068864 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:54:01.068886 67740 retry.go:31] will retry after 11.446957942s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:54:12.516410 67740 provision.go:84] configureAuth start
I0522 17:54:12.516501 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:54:12.532588 67740 provision.go:87] duration metric: took 16.136044ms to configureAuth
W0522 17:54:12.532612 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:54:12.532630 67740 retry.go:31] will retry after 11.131225111s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:54:23.664770 67740 provision.go:84] configureAuth start
I0522 17:54:23.664874 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:54:23.681171 67740 provision.go:87] duration metric: took 16.370034ms to configureAuth
W0522 17:54:23.681191 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:54:23.681208 67740 retry.go:31] will retry after 19.415128992s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:54:43.097052 67740 provision.go:84] configureAuth start
I0522 17:54:43.097128 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:54:43.114019 67740 provision.go:87] duration metric: took 16.940583ms to configureAuth
W0522 17:54:43.114038 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:54:43.114058 67740 ubuntu.go:189] Error configuring auth during provisioning Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:54:43.114064 67740 machine.go:97] duration metric: took 1m11.222400206s to provisionDockerMachine
I0522 17:54:43.114070 67740 client.go:171] duration metric: took 1m17.329214904s to LocalClient.Create
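The `will retry after ...` intervals above grow roughly geometrically with jitter, from tens of milliseconds up to tens of seconds, until the provisioning step gives up and the failure is promoted through `provisionDockerMachine` and `LocalClient.Create`. A sketch of that pattern, assuming jittered exponential backoff against a total deadline (minikube's retry.go may differ in details):

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff is a sketch, not minikube's retry.go: retry an operation
// with jittered exponential backoff until a total deadline passes.
func retryWithBackoff(op func() error, initial, total time.Duration) error {
	deadline := time.Now().Add(total)
	delay := initial
	for {
		err := op()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out: %w", err)
		}
		// Jitter keeps the waits irregular (47ms, 146ms, 83ms, ... above);
		// the base roughly doubles each round.
		time.Sleep(delay/2 + time.Duration(rand.Int63n(int64(delay))))
		delay *= 2
	}
}
```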
I0522 17:54:45.114802 67740 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0522 17:54:45.114851 67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
I0522 17:54:45.131137 67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32792 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa Username:docker}
I0522 17:54:45.211800 67740 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0522 17:54:45.215648 67740 start.go:128] duration metric: took 1m19.43263441s to createHost
I0522 17:54:45.215668 67740 start.go:83] releasing machines lock for "ha-828033-m02", held for 1m19.432772722s
W0522 17:54:45.215682 67740 start.go:713] error starting host: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:54:45.216030 67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
I0522 17:54:45.231847 67740 stop.go:39] StopHost: ha-828033-m02
W0522 17:54:45.232101 67740 register.go:133] "Stopping" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
I0522 17:54:45.233821 67740 out.go:177] * Stopping node "ha-828033-m02" ...
I0522 17:54:45.235034 67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
W0522 17:54:45.250648 67740 register.go:133] "PowerOff" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
I0522 17:54:45.252222 67740 out.go:177] * Powering off "ha-828033-m02" via SSH ...
I0522 17:54:45.253375 67740 cli_runner.go:164] Run: docker exec --privileged -t ha-828033-m02 /bin/bash -c "sudo init 0"
I0522 17:54:46.310178 67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
I0522 17:54:46.325583 67740 oci.go:658] container ha-828033-m02 status is Stopped
I0522 17:54:46.325611 67740 oci.go:670] Successfully shutdown container ha-828033-m02
I0522 17:54:46.325618 67740 stop.go:96] shutdown container: err=<nil>
I0522 17:54:46.325665 67740 main.go:141] libmachine: Stopping "ha-828033-m02"...
I0522 17:54:46.325732 67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
I0522 17:54:46.341372 67740 stop.go:66] stop err: Machine "ha-828033-m02" is already stopped.
I0522 17:54:46.341401 67740 stop.go:69] host is already stopped
W0522 17:54:47.341542 67740 register.go:133] "PowerOff" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
I0522 17:54:47.343381 67740 out.go:177] * Deleting "ha-828033-m02" in docker ...
I0522 17:54:47.344698 67740 cli_runner.go:164] Run: docker container inspect -f {{.Id}} ha-828033-m02
I0522 17:54:47.361099 67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
I0522 17:54:47.376628 67740 cli_runner.go:164] Run: docker exec --privileged -t ha-828033-m02 /bin/bash -c "sudo init 0"
W0522 17:54:47.392353 67740 cli_runner.go:211] docker exec --privileged -t ha-828033-m02 /bin/bash -c "sudo init 0" returned with exit code 1
I0522 17:54:47.392393 67740 oci.go:650] error shutdown ha-828033-m02: docker exec --privileged -t ha-828033-m02 /bin/bash -c "sudo init 0": exit status 1
stdout:
stderr:
Error response from daemon: container 3cc3e3e24d53966ce3e9b255e1c04504ec6e9b4b2d34e3fd546f7c2f3049902f is not running
I0522 17:54:48.392556 67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
I0522 17:54:48.408902 67740 oci.go:658] container ha-828033-m02 status is Stopped
I0522 17:54:48.408930 67740 oci.go:670] Successfully shutdown container ha-828033-m02
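The shutdown sequence above is two-phase: a graceful `sudo init 0` inside the container, then polling `docker container inspect` for the state. The exit-status-1 from the second `init 0` is expected, the container had already stopped, and the code only cares that the status check lands on a stopped state. A hedged sketch of that flow (docker itself reports stopped containers as `exited`; the `Stopped` wording in the log is minikube's own mapping):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// shutdownContainer sketches the stop-then-verify flow from the log; the
// helper name is illustrative.
func shutdownContainer(name string) error {
	// Graceful attempt; exits 1 if the container already stopped, which is
	// exactly the benign "is not running" error logged above.
	_ = exec.Command("docker", "exec", "--privileged", "-t", name,
		"/bin/bash", "-c", "sudo init 0").Run()

	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	if err != nil {
		return err
	}
	if status := strings.TrimSpace(string(out)); status != "exited" {
		return fmt.Errorf("container %s not stopped (status %q)", name, status)
	}
	return nil
}
```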
I0522 17:54:48.408985 67740 cli_runner.go:164] Run: docker rm -f -v ha-828033-m02
I0522 17:54:48.429674 67740 cli_runner.go:164] Run: docker container inspect -f {{.Id}} ha-828033-m02
W0522 17:54:48.445584 67740 cli_runner.go:211] docker container inspect -f {{.Id}} ha-828033-m02 returned with exit code 1
I0522 17:54:48.445652 67740 cli_runner.go:164] Run: docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0522 17:54:48.460965 67740 cli_runner.go:164] Run: docker network rm ha-828033
W0522 17:54:48.475541 67740 cli_runner.go:211] docker network rm ha-828033 returned with exit code 1
W0522 17:54:48.475635 67740 kic.go:390] failed to remove network (which might be okay) ha-828033: unable to delete a network that is attached to a running container
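`docker network rm ha-828033` fails here because the primary control-plane container is still attached to the network, and the log explicitly treats that as non-fatal ("which might be okay"). A sketch of a guard that only attempts removal once the network is empty (the helper name is illustrative):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// removeNetworkIfUnused shows why the failed "docker network rm" above is
// tolerable: removal cannot succeed while any container, here the primary
// node ha-828033, is still attached.
func removeNetworkIfUnused(name string) error {
	out, err := exec.Command("docker", "network", "inspect", name,
		"--format", "{{len .Containers}}").Output()
	if err != nil {
		return err
	}
	if strings.TrimSpace(string(out)) != "0" {
		return fmt.Errorf("network %s still has attached containers", name)
	}
	return exec.Command("docker", "network", "rm", name).Run()
}
```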
W0522 17:54:48.475837 67740 out.go:239] ! StartHost failed, but will try again: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:54:48.475849 67740 start.go:728] Will try again in 5 seconds ...
I0522 17:54:53.476927 67740 start.go:360] acquireMachinesLock for ha-828033-m02: {Name:mk5f7d59268c8438badf0506a539b57f0dca67dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0522 17:54:53.477039 67740 start.go:364] duration metric: took 79.073µs to acquireMachinesLock for "ha-828033-m02"
I0522 17:54:53.477066 67740 start.go:93] Provisioning new machine with config: &{Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0522 17:54:53.477162 67740 start.go:125] createHost starting for "m02" (driver="docker")
I0522 17:54:53.479034 67740 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
I0522 17:54:53.479153 67740 start.go:159] libmachine.API.Create for "ha-828033" (driver="docker")
I0522 17:54:53.479185 67740 client.go:168] LocalClient.Create starting
I0522 17:54:53.479249 67740 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem
I0522 17:54:53.479310 67740 main.go:141] libmachine: Decoding PEM data...
I0522 17:54:53.479333 67740 main.go:141] libmachine: Parsing certificate...
I0522 17:54:53.479397 67740 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem
I0522 17:54:53.479424 67740 main.go:141] libmachine: Decoding PEM data...
I0522 17:54:53.479441 67740 main.go:141] libmachine: Parsing certificate...
I0522 17:54:53.479649 67740 cli_runner.go:164] Run: docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0522 17:54:53.495874 67740 network_create.go:77] Found existing network {name:ha-828033 subnet:0xc001acd440 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
I0522 17:54:53.495903 67740 kic.go:121] calculated static IP "192.168.49.3" for the "ha-828033-m02" container
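Having found the existing `ha-828033` network (subnet 192.168.49.0/24, gateway 192.168.49.1), the driver derives a deterministic address for the second node: node 1 took .2, so m02 gets .3. A hypothetical reconstruction of that arithmetic:

```go
package main

import (
	"fmt"
	"net"
)

// staticIPForNode is a hypothetical reconstruction of the "calculated
// static IP" step: on the cluster subnet, node N gets base+N+1, so the
// first control plane is .2 and m02 is .3.
func staticIPForNode(subnetCIDR string, node int) (string, error) {
	ip, _, err := net.ParseCIDR(subnetCIDR)
	if err != nil {
		return "", err
	}
	v4 := ip.To4()
	if v4 == nil {
		return "", fmt.Errorf("sketch handles IPv4 only")
	}
	v4[3] += byte(node + 1) // no overflow/collision handling in this sketch
	return v4.String(), nil
}
```

With `staticIPForNode("192.168.49.0/24", 2)` this returns 192.168.49.3, matching the log line above.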
I0522 17:54:53.495960 67740 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0522 17:54:53.511000 67740 cli_runner.go:164] Run: docker volume create ha-828033-m02 --label name.minikube.sigs.k8s.io=ha-828033-m02 --label created_by.minikube.sigs.k8s.io=true
I0522 17:54:53.526234 67740 oci.go:103] Successfully created a docker volume ha-828033-m02
I0522 17:54:53.526311 67740 cli_runner.go:164] Run: docker run --rm --name ha-828033-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-828033-m02 --entrypoint /usr/bin/test -v ha-828033-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -d /var/lib
I0522 17:54:53.904691 67740 oci.go:107] Successfully prepared a docker volume ha-828033-m02
I0522 17:54:53.904730 67740 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
I0522 17:54:53.904761 67740 kic.go:194] Starting extracting preloaded images to volume ...
I0522 17:54:53.904817 67740 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-828033-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir
I0522 17:54:58.186920 67740 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-828033-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir: (4.282059818s)
I0522 17:54:58.186951 67740 kic.go:203] duration metric: took 4.282198207s to extract preloaded images to volume ...
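The 4.3s step above primes the `ha-828033-m02` volume before the node container exists: a throwaway container mounts the preload tarball read-only and untars it into the volume with lz4, so `/var` is pre-populated with the v1.30.1 images. Roughly, as a sketch built from the command in the log:

```go
package main

import "os/exec"

// extractPreload sketches the volume-priming command from the log: a
// throwaway container untars the preloaded image tarball into the named
// volume so the node starts with /var already populated.
func extractPreload(volume, tarball, baseImage string) error {
	return exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		baseImage,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir").Run()
}
```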
W0522 17:54:58.187117 67740 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0522 17:54:58.187205 67740 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0522 17:54:58.233376 67740 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-828033-m02 --name ha-828033-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-828033-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-828033-m02 --network ha-828033 --ip 192.168.49.3 --volume ha-828033-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a
I0522 17:54:58.523486 67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Running}}
I0522 17:54:58.540206 67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
I0522 17:54:58.557874 67740 cli_runner.go:164] Run: docker exec ha-828033-m02 stat /var/lib/dpkg/alternatives/iptables
I0522 17:54:58.597167 67740 oci.go:144] the created container "ha-828033-m02" has a running status.
I0522 17:54:58.597198 67740 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa...
I0522 17:54:58.715099 67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
I0522 17:54:58.715136 67740 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0522 17:54:58.734167 67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
I0522 17:54:58.752454 67740 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0522 17:54:58.752480 67740 kic_runner.go:114] Args: [docker exec --privileged ha-828033-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
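The SSH key created at `.../machines/ha-828033-m02/id_rsa` is a standard keypair whose public half is copied into the container as `/home/docker/.ssh/authorized_keys` (381 bytes here) and chowned to `docker:docker`. A sketch of the generation step, assuming an RSA key and the `golang.org/x/crypto/ssh` helpers (minikube's actual key type and helpers may differ):

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"os"

	"golang.org/x/crypto/ssh"
)

// newSSHKeypair is a sketch of the id_rsa generation step; the .pub output
// is the authorized_keys line that the log copies into the container.
func newSSHKeypair(path string) error {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return err
	}
	privPEM := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	})
	if err := os.WriteFile(path, privPEM, 0o600); err != nil {
		return err
	}
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		return err
	}
	return os.WriteFile(path+".pub", ssh.MarshalAuthorizedKey(pub), 0o644)
}
```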
I0522 17:54:58.793632 67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
I0522 17:54:58.811842 67740 machine.go:94] provisionDockerMachine start ...
I0522 17:54:58.811942 67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
I0522 17:54:58.831262 67740 main.go:141] libmachine: Using SSH client type: native
I0522 17:54:58.831524 67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil> [] 0s} 127.0.0.1 32797 <nil> <nil>}
I0522 17:54:58.831543 67740 main.go:141] libmachine: About to run SSH command:
hostname
I0522 17:54:58.832166 67740 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40570->127.0.0.1:32797: read: connection reset by peer
I0522 17:55:01.950656 67740 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033-m02
I0522 17:55:01.950684 67740 ubuntu.go:169] provisioning hostname "ha-828033-m02"
I0522 17:55:01.950756 67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
I0522 17:55:01.967254 67740 main.go:141] libmachine: Using SSH client type: native
I0522 17:55:01.967478 67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil> [] 0s} 127.0.0.1 32797 <nil> <nil>}
I0522 17:55:01.967497 67740 main.go:141] libmachine: About to run SSH command:
sudo hostname ha-828033-m02 && echo "ha-828033-m02" | sudo tee /etc/hostname
I0522 17:55:02.089579 67740 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033-m02
I0522 17:55:02.089655 67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
I0522 17:55:02.105960 67740 main.go:141] libmachine: Using SSH client type: native
I0522 17:55:02.106178 67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil> [] 0s} 127.0.0.1 32797 <nil> <nil>}
I0522 17:55:02.106203 67740 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sha-828033-m02' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-828033-m02/g' /etc/hosts;
else
echo '127.0.1.1 ha-828033-m02' | sudo tee -a /etc/hosts;
fi
fi
I0522 17:55:02.219113 67740 main.go:141] libmachine: SSH cmd err, output: <nil>:
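All of the provisioning commands above (`hostname`, the `/etc/hostname` write, the `/etc/hosts` patch) run over the "native" SSH client against the forwarded port 32797; the first dial even hits a connection reset while sshd is still coming up, then succeeds about three seconds later. A self-contained sketch of one such call using `golang.org/x/crypto/ssh` (names and the insecure host-key callback are for illustration only):

```go
package main

import (
	"os"

	"golang.org/x/crypto/ssh"
)

// runOverSSH sketches one provisioning call: dial the forwarded localhost
// port (32797 for this node), authenticate with the machine's id_rsa, and
// run a command such as the hostname script above. Skipping host-key
// verification mirrors a throwaway test rig, not production practice.
func runOverSSH(port, keyPath, command string) (string, error) {
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:"+port, cfg)
	if err != nil {
		return "", err // e.g. "connection reset by peer" while sshd starts
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(command)
	return string(out), err
}
```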
I0522 17:55:02.219142 67740 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
I0522 17:55:02.219165 67740 ubuntu.go:177] setting up certificates
I0522 17:55:02.219178 67740 provision.go:84] configureAuth start
I0522 17:55:02.219229 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:55:02.235165 67740 provision.go:87] duration metric: took 15.978249ms to configureAuth
W0522 17:55:02.235185 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:02.235202 67740 retry.go:31] will retry after 111.405µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:02.236316 67740 provision.go:84] configureAuth start
I0522 17:55:02.236371 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:55:02.251579 67740 provision.go:87] duration metric: took 15.244801ms to configureAuth
W0522 17:55:02.251596 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:02.251612 67740 retry.go:31] will retry after 121.831µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:02.252718 67740 provision.go:84] configureAuth start
I0522 17:55:02.252781 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:55:02.268254 67740 provision.go:87] duration metric: took 15.517955ms to configureAuth
W0522 17:55:02.268272 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:02.268289 67740 retry.go:31] will retry after 122.468µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:02.269405 67740 provision.go:84] configureAuth start
I0522 17:55:02.269470 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:55:02.286410 67740 provision.go:87] duration metric: took 16.987035ms to configureAuth
W0522 17:55:02.286429 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:02.286450 67740 retry.go:31] will retry after 407.867µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:02.287564 67740 provision.go:84] configureAuth start
I0522 17:55:02.287622 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:55:02.302324 67740 provision.go:87] duration metric: took 14.743181ms to configureAuth
W0522 17:55:02.302338 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:02.302353 67740 retry.go:31] will retry after 682.441µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:02.303472 67740 provision.go:84] configureAuth start
I0522 17:55:02.303536 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:55:02.318179 67740 provision.go:87] duration metric: took 14.688319ms to configureAuth
W0522 17:55:02.318196 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:02.318213 67740 retry.go:31] will retry after 740.096µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:02.319311 67740 provision.go:84] configureAuth start
I0522 17:55:02.319362 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:55:02.333371 67740 provision.go:87] duration metric: took 14.043622ms to configureAuth
W0522 17:55:02.333386 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:02.333402 67740 retry.go:31] will retry after 794.169µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:02.334517 67740 provision.go:84] configureAuth start
I0522 17:55:02.334581 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:55:02.350167 67740 provision.go:87] duration metric: took 15.635141ms to configureAuth
W0522 17:55:02.350182 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:02.350198 67740 retry.go:31] will retry after 1.884267ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:02.352398 67740 provision.go:84] configureAuth start
I0522 17:55:02.352452 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:55:02.368273 67740 provision.go:87] duration metric: took 15.856327ms to configureAuth
W0522 17:55:02.368295 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:02.368312 67740 retry.go:31] will retry after 2.946487ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:02.371499 67740 provision.go:84] configureAuth start
I0522 17:55:02.371558 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:55:02.386648 67740 provision.go:87] duration metric: took 15.13195ms to configureAuth
W0522 17:55:02.386668 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:02.386686 67740 retry.go:31] will retry after 3.738526ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:02.390865 67740 provision.go:84] configureAuth start
I0522 17:55:02.390919 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:55:02.406987 67740 provision.go:87] duration metric: took 16.104393ms to configureAuth
W0522 17:55:02.407002 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:02.407015 67740 retry.go:31] will retry after 6.575896ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:02.414192 67740 provision.go:84] configureAuth start
I0522 17:55:02.414252 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:55:02.428668 67740 provision.go:87] duration metric: took 14.459146ms to configureAuth
W0522 17:55:02.428682 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:02.428697 67740 retry.go:31] will retry after 8.970723ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:02.437877 67740 provision.go:84] configureAuth start
I0522 17:55:02.437947 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:55:02.454233 67740 provision.go:87] duration metric: took 16.335255ms to configureAuth
W0522 17:55:02.454251 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:02.454267 67740 retry.go:31] will retry after 10.684147ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:02.465455 67740 provision.go:84] configureAuth start
I0522 17:55:02.465526 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:55:02.481723 67740 provision.go:87] duration metric: took 16.239661ms to configureAuth
W0522 17:55:02.481741 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:02.481763 67740 retry.go:31] will retry after 18.313065ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:02.500964 67740 provision.go:84] configureAuth start
I0522 17:55:02.501036 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:55:02.516727 67740 provision.go:87] duration metric: took 15.73571ms to configureAuth
W0522 17:55:02.516744 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:02.516762 67740 retry.go:31] will retry after 38.484546ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:02.555967 67740 provision.go:84] configureAuth start
I0522 17:55:02.556066 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:55:02.571765 67740 provision.go:87] duration metric: took 15.775996ms to configureAuth
W0522 17:55:02.571791 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:02.571810 67740 retry.go:31] will retry after 39.432408ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:02.612013 67740 provision.go:84] configureAuth start
I0522 17:55:02.612103 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:55:02.628447 67740 provision.go:87] duration metric: took 16.410627ms to configureAuth
W0522 17:55:02.628466 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:02.628485 67740 retry.go:31] will retry after 33.551108ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:02.662675 67740 provision.go:84] configureAuth start
I0522 17:55:02.662769 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:55:02.679445 67740 provision.go:87] duration metric: took 16.731972ms to configureAuth
W0522 17:55:02.679464 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:02.679484 67740 retry.go:31] will retry after 81.05036ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:02.760653 67740 provision.go:84] configureAuth start
I0522 17:55:02.760738 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:55:02.776954 67740 provision.go:87] duration metric: took 16.276016ms to configureAuth
W0522 17:55:02.776979 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:02.776998 67740 retry.go:31] will retry after 214.543912ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:02.992409 67740 provision.go:84] configureAuth start
I0522 17:55:02.992522 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:55:03.009801 67740 provision.go:87] duration metric: took 17.348572ms to configureAuth
W0522 17:55:03.009828 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:03.009848 67740 retry.go:31] will retry after 147.68294ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:03.158197 67740 provision.go:84] configureAuth start
I0522 17:55:03.158288 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:55:03.174209 67740 provision.go:87] duration metric: took 15.985368ms to configureAuth
W0522 17:55:03.174228 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:03.174245 67740 retry.go:31] will retry after 271.429453ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:03.446454 67740 provision.go:84] configureAuth start
I0522 17:55:03.446568 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:55:03.462755 67740 provision.go:87] duration metric: took 16.269029ms to configureAuth
W0522 17:55:03.462775 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:03.462813 67740 retry.go:31] will retry after 640.121031ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:04.103329 67740 provision.go:84] configureAuth start
I0522 17:55:04.103429 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:55:04.120167 67740 provision.go:87] duration metric: took 16.813953ms to configureAuth
W0522 17:55:04.120188 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:04.120208 67740 retry.go:31] will retry after 602.013778ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:04.722980 67740 provision.go:84] configureAuth start
I0522 17:55:04.723059 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:55:04.739287 67740 provision.go:87] duration metric: took 16.263112ms to configureAuth
W0522 17:55:04.739308 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:04.739326 67740 retry.go:31] will retry after 1.341223625s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:06.081721 67740 provision.go:84] configureAuth start
I0522 17:55:06.081836 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:55:06.098304 67740 provision.go:87] duration metric: took 16.547011ms to configureAuth
W0522 17:55:06.098322 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:06.098338 67740 retry.go:31] will retry after 2.170272382s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:08.269528 67740 provision.go:84] configureAuth start
I0522 17:55:08.269635 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:55:08.285825 67740 provision.go:87] duration metric: took 16.2651ms to configureAuth
W0522 17:55:08.285844 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:08.285861 67740 retry.go:31] will retry after 3.377189854s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:11.663807 67740 provision.go:84] configureAuth start
I0522 17:55:11.663916 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:55:11.681079 67740 provision.go:87] duration metric: took 17.243701ms to configureAuth
W0522 17:55:11.681112 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:11.681131 67740 retry.go:31] will retry after 2.766930623s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:14.448404 67740 provision.go:84] configureAuth start
I0522 17:55:14.448485 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:55:14.465374 67740 provision.go:87] duration metric: took 16.943416ms to configureAuth
W0522 17:55:14.465392 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:14.465408 67740 retry.go:31] will retry after 7.317834793s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:21.783808 67740 provision.go:84] configureAuth start
I0522 17:55:21.783931 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:55:21.801618 67740 provision.go:87] duration metric: took 17.778585ms to configureAuth
W0522 17:55:21.801637 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:21.801655 67740 retry.go:31] will retry after 5.749970452s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:27.552576 67740 provision.go:84] configureAuth start
I0522 17:55:27.552676 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:55:27.569090 67740 provision.go:87] duration metric: took 16.487886ms to configureAuth
W0522 17:55:27.569109 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:27.569126 67740 retry.go:31] will retry after 12.570280817s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:40.141724 67740 provision.go:84] configureAuth start
I0522 17:55:40.141836 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:55:40.158702 67740 provision.go:87] duration metric: took 16.931082ms to configureAuth
W0522 17:55:40.158723 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:40.158743 67740 retry.go:31] will retry after 13.696494034s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:53.856578 67740 provision.go:84] configureAuth start
I0522 17:55:53.856693 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:55:53.873246 67740 provision.go:87] duration metric: took 16.620408ms to configureAuth
W0522 17:55:53.873273 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:55:53.873290 67740 retry.go:31] will retry after 32.163778232s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:56:26.037485 67740 provision.go:84] configureAuth start
I0522 17:56:26.037596 67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 17:56:26.054707 67740 provision.go:87] duration metric: took 17.19549ms to configureAuth
W0522 17:56:26.054725 67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:56:26.054742 67740 ubuntu.go:189] Error configuring auth during provisioning Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:56:26.054750 67740 machine.go:97] duration metric: took 1m27.242886101s to provisionDockerMachine
I0522 17:56:26.054758 67740 client.go:171] duration metric: took 1m32.575565656s to LocalClient.Create
I0522 17:56:28.055434 67740 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0522 17:56:28.055492 67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
I0522 17:56:28.072469 67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32797 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa Username:docker}
I0522 17:56:28.155834 67740 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0522 17:56:28.159690 67740 start.go:128] duration metric: took 1m34.682513511s to createHost
I0522 17:56:28.159711 67740 start.go:83] releasing machines lock for "ha-828033-m02", held for 1m34.682658667s
W0522 17:56:28.159799 67740 out.go:239] * Failed to start docker container. Running "minikube delete -p ha-828033" may fix it: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 17:56:28.161597 67740 out.go:177]
W0522 17:56:28.162787 67740 out.go:239] X Exiting due to GUEST_START: failed to start node: adding node: Failed to start host: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
W0522 17:56:28.162807 67740 out.go:239] *
W0522 17:56:28.163671 67740 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0522 17:56:28.165036 67740 out.go:177]
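Stepping back, the stderr trace shows the outer control flow: the first createHost for m02 fails on the IP lookup, the node is powered off and deleted, minikube waits five seconds and retries exactly once, and the second identical failure becomes the fatal GUEST_START exit. As a sketch:

```go
package main

import (
	"fmt"
	"time"
)

// startWithOneRetry sketches the outer flow of the trace: one failed
// createHost triggers cleanup and a single 5s-delayed retry; a second
// failure becomes the fatal GUEST_START exit above.
func startWithOneRetry(createHost func() error, deleteHost func()) error {
	if err := createHost(); err == nil {
		return nil
	}
	deleteHost() // "Stopping node", "Powering off", "Deleting ... in docker"
	time.Sleep(5 * time.Second)
	if err := createHost(); err != nil {
		return fmt.Errorf("failed to start node: %w", err)
	}
	return nil
}
```

The `==> Docker <==` dump that follows comes from the primary node and shows it was otherwise healthy; the failure was confined to provisioning m02.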
==> Docker <==
May 22 17:53:05 ha-828033 cri-dockerd[1428]: time="2024-05-22T17:53:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5c61ca7a89838df4da2bace0fa74ffeab37fcf68c1bd7b502ff0191f46ba59f4/resolv.conf as [nameserver 192.168.49.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
May 22 17:53:05 ha-828033 cri-dockerd[1428]: time="2024-05-22T17:53:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ca6a020652c5315e9cdab62b3f33c6eff8881ec5e8ef8003e9735e75932adcc6/resolv.conf as [nameserver 192.168.49.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
May 22 17:53:05 ha-828033 cri-dockerd[1428]: time="2024-05-22T17:53:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1648bcaea393a0b5ddfbf0f768d5e989217a09977f420bcefe5d82554e1e83fe/resolv.conf as [nameserver 192.168.49.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options trust-ad ndots:0 edns0]"
May 22 17:53:05 ha-828033 cri-dockerd[1428]: time="2024-05-22T17:53:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/06f42956ef3cd3359e1bcca52e41ff3b2048fb4c3c75f96636ea439a7ffe37c9/resolv.conf as [nameserver 192.168.49.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
May 22 17:53:05 ha-828033 cri-dockerd[1428]: time="2024-05-22T17:53:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4d7edccdc49b22ec9cc59e71bc3d4f4089c78b1b448eab3c8012fc9a32dfc290/resolv.conf as [nameserver 192.168.49.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
May 22 17:53:08 ha-828033 cri-dockerd[1428]: time="2024-05-22T17:53:08Z" level=info msg="Stop pulling image ghcr.io/kube-vip/kube-vip:v0.8.0: Status: Downloaded newer image for ghcr.io/kube-vip/kube-vip:v0.8.0"
May 22 17:53:25 ha-828033 cri-dockerd[1428]: time="2024-05-22T17:53:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7920c4e0230819f5c621ee8ab19a8bd59c1053a4c4c9148fc2ab7993a5422497/resolv.conf as [nameserver 192.168.49.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options trust-ad ndots:0 edns0]"
May 22 17:53:25 ha-828033 cri-dockerd[1428]: time="2024-05-22T17:53:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7caff96cd793b86249a3872a817399fd83ab776260c99c039376f84ba3c96e89/resolv.conf as [nameserver 192.168.49.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
May 22 17:53:26 ha-828033 cri-dockerd[1428]: time="2024-05-22T17:53:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8b3fd8cf48c9595a338c9efce60e4362f1d6c5dbb5b1b6fc5065ecf200c879ff/resolv.conf as [nameserver 192.168.49.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
May 22 17:53:26 ha-828033 cri-dockerd[1428]: time="2024-05-22T17:53:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/91e8c76c71ae7c1fe51a6927a57419a0bf7e4407f4dfae175272d8249720d690/resolv.conf as [nameserver 192.168.49.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:0 edns0 trust-ad]"
May 22 17:53:26 ha-828033 cri-dockerd[1428]: time="2024-05-22T17:53:26Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-gznzs_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
May 22 17:53:26 ha-828033 cri-dockerd[1428]: time="2024-05-22T17:53:26Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-dxfhb_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
May 22 17:53:26 ha-828033 cri-dockerd[1428]: time="2024-05-22T17:53:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/715e7f0294d0abed48fa06f52203ec68a5d32aee5fba94994c174dd344eb382d/resolv.conf as [nameserver 192.168.49.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
May 22 17:53:26 ha-828033 cri-dockerd[1428]: time="2024-05-22T17:53:26Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-gznzs_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
May 22 17:53:26 ha-828033 cri-dockerd[1428]: time="2024-05-22T17:53:26Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-dxfhb_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
May 22 17:53:27 ha-828033 cri-dockerd[1428]: time="2024-05-22T17:53:27Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-gznzs_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
May 22 17:53:27 ha-828033 cri-dockerd[1428]: time="2024-05-22T17:53:27Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-dxfhb_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
May 22 17:53:30 ha-828033 cri-dockerd[1428]: time="2024-05-22T17:53:30Z" level=info msg="Stop pulling image docker.io/kindest/kindnetd:v20240513-cd2ac642: Status: Downloaded newer image for kindest/kindnetd:v20240513-cd2ac642"
May 22 17:53:32 ha-828033 cri-dockerd[1428]: time="2024-05-22T17:53:32Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
May 22 17:53:39 ha-828033 dockerd[1209]: time="2024-05-22T17:53:39.399087699Z" level=info msg="ignoring event" container=dd5bd702646a46de165f70b974819728d0d1e4dcd480a756580f462132b4e49b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
May 22 17:53:39 ha-828033 dockerd[1209]: time="2024-05-22T17:53:39.477631846Z" level=info msg="ignoring event" container=8b3fd8cf48c9595a338c9efce60e4362f1d6c5dbb5b1b6fc5065ecf200c879ff module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
May 22 17:53:39 ha-828033 dockerd[1209]: time="2024-05-22T17:53:39.558216062Z" level=info msg="ignoring event" container=63f49aaadee913b978ed9eff66b35c52ee24c7ed5fb7c74f4c3fc76578c0f4a1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
May 22 17:53:39 ha-828033 dockerd[1209]: time="2024-05-22T17:53:39.607461254Z" level=info msg="ignoring event" container=91e8c76c71ae7c1fe51a6927a57419a0bf7e4407f4dfae175272d8249720d690 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
May 22 17:53:39 ha-828033 cri-dockerd[1428]: time="2024-05-22T17:53:39Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/921c71ab51b2950a38ec6a5d23870de457a14e415a7ad00f912cc87e12bfa805/resolv.conf as [nameserver 192.168.49.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
May 22 17:53:39 ha-828033 cri-dockerd[1428]: time="2024-05-22T17:53:39Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/cddac885b8c2a5bed4d06b9a6e03613c3fcbf0b9a85b1b7454ca9ec3efb09323/resolv.conf as [nameserver 192.168.49.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
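The "Failed to read pod IP" entries above are cri-dockerd's status hook querying an interface inside a pod network namespace that has no CNI interface yet; they stop once kindnet finishes pulling (17:53:30) and the PodCidr runtime config lands (17:53:32). A minimal, hypothetical Go reproduction of the failing probe follows; the exact ip(8) invocation is an assumption, not cri-dockerd's actual code:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Run the same kind of interface query inside a netns that has no CNI
	// interface yet: ip(8) prints `Device "eth0" does not exist.` and exits 1,
	// matching the "unexpected command output ... exit status 1" seen above.
	out, err := exec.Command("ip", "-o", "-4", "addr", "show", "dev", "eth0").CombinedOutput()
	fmt.Printf("output=%q err=%v\n", out, err)
}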
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
3d03dbb9a9ab6 cbb01a7bd410d 2 minutes ago Running coredns 1 cddac885b8c2a coredns-7db6d8ff4d-dxfhb
f7fd69b1c56b6 cbb01a7bd410d 2 minutes ago Running coredns 1 921c71ab51b29 coredns-7db6d8ff4d-gznzs
f6aa98f9307fc kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8 2 minutes ago Running kindnet-cni 0 7caff96cd793b kindnet-swzdx
4aff7c101c8df 6e38f40d628db 3 minutes ago Running storage-provisioner 0 715e7f0294d0a storage-provisioner
63f49aaadee91 cbb01a7bd410d 3 minutes ago Exited coredns 0 91e8c76c71ae7 coredns-7db6d8ff4d-dxfhb
dd5bd702646a4 cbb01a7bd410d 3 minutes ago Exited coredns 0 8b3fd8cf48c95 coredns-7db6d8ff4d-gznzs
faac4370a3326 747097150317f 3 minutes ago Running kube-proxy 0 7920c4e023081 kube-proxy-fl69s
a9f9b4a4a64a7 ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f 3 minutes ago Running kube-vip 0 1648bcaea393a kube-vip-ha-828033
f457f32fdd43d a52dc94f0a912 3 minutes ago Running kube-scheduler 0 4d7edccdc49b2 kube-scheduler-ha-828033
71559235c3028 91be940803172 3 minutes ago Running kube-apiserver 0 06f42956ef3cd kube-apiserver-ha-828033
3a9c3dbadc741 3861cfcd7c04c 3 minutes ago Running etcd 0 ca6a020652c53 etcd-ha-828033
dce56fa365a91 25a1387cdab82 3 minutes ago Running kube-controller-manager 0 5c61ca7a89838 kube-controller-manager-ha-828033
==> coredns [3d03dbb9a9ab] <==
.:53
[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
CoreDNS-1.11.1
linux/amd64, go1.20.7, ae2bbc2
[INFO] 127.0.0.1:50131 - 3587 "HINFO IN 4470179154878961707.8016045803374425342. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008706143s
==> coredns [63f49aaadee9] <==
[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
.:53
[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
CoreDNS-1.11.1
linux/amd64, go1.20.7, ae2bbc2
[INFO] plugin/health: Going into lameduck mode for 5s
[INFO] 127.0.0.1:45766 - 14716 "HINFO IN 3871236508495439397.4299498115821132756. udp 57 false 512" - - 0 5.028282739s
[ERROR] plugin/errors: 2 3871236508495439397.4299498115821132756. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
[INFO] 127.0.0.1:49572 - 10170 "HINFO IN 3871236508495439397.4299498115821132756. udp 57 false 512" - - 0 5.000099963s
[ERROR] plugin/errors: 2 3871236508495439397.4299498115821132756. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
==> coredns [dd5bd702646a] <==
[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
.:53
[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
CoreDNS-1.11.1
linux/amd64, go1.20.7, ae2bbc2
[INFO] plugin/health: Going into lameduck mode for 5s
[INFO] 127.0.0.1:39647 - 60877 "HINFO IN 7283636356468983640.375644074009443122. udp 56 false 512" - - 0 5.000133195s
[ERROR] plugin/errors: 2 7283636356468983640.375644074009443122. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
[INFO] 127.0.0.1:48160 - 62410 "HINFO IN 7283636356468983640.375644074009443122. udp 56 false 512" - - 0 5.000085105s
[ERROR] plugin/errors: 2 7283636356468983640.375644074009443122. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
==> coredns [f7fd69b1c56b] <==
.:53
[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
CoreDNS-1.11.1
linux/amd64, go1.20.7, ae2bbc2
[INFO] 127.0.0.1:56012 - 52073 "HINFO IN 3567271529342956343.6809638899562307915. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013755393s
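The two exited CoreDNS containers ([63f49aaadee9] and [dd5bd702646a]) fail every dial with "network is unreachable" because they started before the CNI was wired up; their replacements ([3d03dbb9a9ab] and [f7fd69b1c56b]) pass the HINFO self-check normally. A minimal Go sketch of the same two probes, using only the addresses shown in the log (everything else is illustrative):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The API service VIP is dialed over TCP, the upstream resolver over UDP.
	// A UDP "dial" sends nothing; connect() fails with ENETUNREACH when the
	// route lookup fails, which is exactly the error CoreDNS logs above.
	probes := []struct{ network, addr string }{
		{"tcp", "10.96.0.1:443"},
		{"udp", "192.168.49.1:53"},
	}
	for _, p := range probes {
		conn, err := net.DialTimeout(p.network, p.addr, 2*time.Second)
		if err != nil {
			fmt.Printf("%s %s: %v\n", p.network, p.addr, err)
			continue
		}
		conn.Close()
		fmt.Printf("%s %s: reachable\n", p.network, p.addr)
	}
}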
==> describe nodes <==
Name: ha-828033
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=ha-828033
kubernetes.io/os=linux
minikube.k8s.io/commit=461168c3991b3796899fb93cd381299efb7493c9
minikube.k8s.io/name=ha-828033
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2024_05_22T17_53_12_0700
minikube.k8s.io/version=v1.33.1
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Wed, 22 May 2024 17:53:11 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: ha-828033
AcquireTime: <unset>
RenewTime: Wed, 22 May 2024 17:56:25 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Wed, 22 May 2024 17:53:42 +0000 Wed, 22 May 2024 17:53:11 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Wed, 22 May 2024 17:53:42 +0000 Wed, 22 May 2024 17:53:11 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Wed, 22 May 2024 17:53:42 +0000 Wed, 22 May 2024 17:53:11 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Wed, 22 May 2024 17:53:42 +0000 Wed, 22 May 2024 17:53:12 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.49.2
Hostname: ha-828033
Capacity:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32859356Ki
pods: 110
Allocatable:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32859356Ki
pods: 110
System Info:
Machine ID: ae91489c226b473f87d2128d6a868a8a
System UUID: dcef1866-ae43-483c-a65a-94c2bd9ff7da
Boot ID: e5b4465e-51c8-4026-9dab-c7060cf83b22
Kernel Version: 5.15.0-1060-gcp
OS Image: Ubuntu 22.04.4 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://26.1.2
Kubelet Version: v1.30.1
Kube-Proxy Version: v1.30.1
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (10 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system coredns-7db6d8ff4d-dxfhb 100m (1%) 0 (0%) 70Mi (0%) 170Mi (0%) 3m4s
kube-system coredns-7db6d8ff4d-gznzs 100m (1%) 0 (0%) 70Mi (0%) 170Mi (0%) 3m4s
kube-system etcd-ha-828033 100m (1%) 0 (0%) 100Mi (0%) 0 (0%) 3m17s
kube-system kindnet-swzdx 100m (1%) 100m (1%) 50Mi (0%) 50Mi (0%) 3m4s
kube-system kube-apiserver-ha-828033 250m (3%) 0 (0%) 0 (0%) 0 (0%) 3m17s
kube-system kube-controller-manager-ha-828033 200m (2%) 0 (0%) 0 (0%) 0 (0%) 3m17s
kube-system kube-proxy-fl69s 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m4s
kube-system kube-scheduler-ha-828033 100m (1%) 0 (0%) 0 (0%) 0 (0%) 3m17s
kube-system kube-vip-ha-828033 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m17s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m4s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 950m (11%) 100m (1%)
memory 290Mi (0%) 390Mi (1%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 3m3s kube-proxy
Normal Starting 3m18s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 3m17s kubelet Node ha-828033 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 3m17s kubelet Node ha-828033 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 3m17s kubelet Node ha-828033 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 3m17s kubelet Updated Node Allocatable limit across pods
Normal NodeReady 3m17s kubelet Node ha-828033 status is now: NodeReady
Normal RegisteredNode 3m5s node-controller Node ha-828033 event: Registered Node ha-828033 in Controller
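The primary control-plane node itself looks healthy here: Ready since 17:53:12, PodCIDR 10.244.0.0/24 assigned, no taints. A hedged client-go sketch that reads the same Ready condition programmatically (the kubeconfig path is a placeholder assumption):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: kubeconfig path; the test harness uses its own profile's kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-828033", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Print the condition shown as "Ready True ... KubeletReady" above.
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Printf("Ready=%s reason=%s msg=%q\n", c.Status, c.Reason, c.Message)
		}
	}
}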
==> dmesg <==
[ +0.007955] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=0000000075807f04
[ +0.008770] FS-Cache: N-key=[8] '0490130200000000'
[ +0.008419] FS-Cache: Duplicate cookie detected
[ +0.005258] FS-Cache: O-cookie c=00000005 [p=00000003 fl=226 nc=0 na=1]
[ +0.008111] FS-Cache: O-cookie d=000000000014cea0{9p.inode} n=00000000f8bfbf2a
[ +0.008735] FS-Cache: O-key=[8] '0490130200000000'
[ +0.006301] FS-Cache: N-cookie c=0000000d [p=00000003 fl=2 nc=0 na=1]
[ +0.007962] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=0000000090b4195c
[ +0.008724] FS-Cache: N-key=[8] '0490130200000000'
[ +2.340067] FS-Cache: Duplicate cookie detected
[ +0.004679] FS-Cache: O-cookie c=00000006 [p=00000003 fl=226 nc=0 na=1]
[ +0.006760] FS-Cache: O-cookie d=000000000014cea0{9p.inode} n=000000003060a4be
[ +0.007535] FS-Cache: O-key=[8] '0390130200000000'
[ +0.004927] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
[ +0.006668] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=0000000075807f04
[ +0.008768] FS-Cache: N-key=[8] '0390130200000000'
[ +0.243815] FS-Cache: Duplicate cookie detected
[ +0.004693] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
[ +0.006773] FS-Cache: O-cookie d=000000000014cea0{9p.inode} n=000000000947cc97
[ +0.007354] FS-Cache: O-key=[8] '0690130200000000'
[ +0.004922] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
[ +0.006590] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=00000000dd977380
[ +0.008723] FS-Cache: N-key=[8] '0690130200000000'
[ +4.941227] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 3f ff d3 2d 32 08 06
==> etcd [3a9c3dbadc74] <==
{"level":"info","ts":"2024-05-22T17:53:06.056485Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
{"level":"info","ts":"2024-05-22T17:53:06.056617Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
{"level":"info","ts":"2024-05-22T17:53:06.057265Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2024-05-22T17:53:06.057494Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2024-05-22T17:53:06.057531Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2024-05-22T17:53:06.0576Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
{"level":"info","ts":"2024-05-22T17:53:06.05762Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
{"level":"info","ts":"2024-05-22T17:53:06.94516Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
{"level":"info","ts":"2024-05-22T17:53:06.945223Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
{"level":"info","ts":"2024-05-22T17:53:06.94525Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
{"level":"info","ts":"2024-05-22T17:53:06.945275Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
{"level":"info","ts":"2024-05-22T17:53:06.945287Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
{"level":"info","ts":"2024-05-22T17:53:06.9453Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
{"level":"info","ts":"2024-05-22T17:53:06.945313Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
{"level":"info","ts":"2024-05-22T17:53:06.946275Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2024-05-22T17:53:06.946897Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:ha-828033 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
{"level":"info","ts":"2024-05-22T17:53:06.946946Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-05-22T17:53:06.94698Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-05-22T17:53:06.947249Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
{"level":"info","ts":"2024-05-22T17:53:06.947244Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2024-05-22T17:53:06.947459Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2024-05-22T17:53:06.947403Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2024-05-22T17:53:06.947503Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2024-05-22T17:53:06.949353Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"info","ts":"2024-05-22T17:53:06.949734Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
==> kernel <==
17:56:29 up 38 min, 0 users, load average: 0.37, 0.98, 0.67
Linux ha-828033 5.15.0-1060-gcp #68~20.04.1-Ubuntu SMP Wed May 1 14:35:27 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 22.04.4 LTS"
==> kindnet [f6aa98f9307f] <==
I0522 17:54:21.579210 1 main.go:227] handling current node
I0522 17:54:31.591187 1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
I0522 17:54:31.591210 1 main.go:227] handling current node
I0522 17:54:41.594946 1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
I0522 17:54:41.594968 1 main.go:227] handling current node
I0522 17:54:51.606297 1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
I0522 17:54:51.606319 1 main.go:227] handling current node
I0522 17:55:01.609679 1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
I0522 17:55:01.609707 1 main.go:227] handling current node
I0522 17:55:11.622453 1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
I0522 17:55:11.622494 1 main.go:227] handling current node
I0522 17:55:21.625731 1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
I0522 17:55:21.625756 1 main.go:227] handling current node
I0522 17:55:31.637651 1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
I0522 17:55:31.637673 1 main.go:227] handling current node
I0522 17:55:41.650145 1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
I0522 17:55:41.650169 1 main.go:227] handling current node
I0522 17:55:51.658448 1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
I0522 17:55:51.658468 1 main.go:227] handling current node
I0522 17:56:01.662230 1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
I0522 17:56:01.662257 1 main.go:227] handling current node
I0522 17:56:11.674509 1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
I0522 17:56:11.674537 1 main.go:227] handling current node
I0522 17:56:21.686569 1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
I0522 17:56:21.686592 1 main.go:227] handling current node
==> kube-apiserver [71559235c302] <==
I0522 17:53:09.143916 1 apf_controller.go:379] Running API Priority and Fairness config worker
I0522 17:53:09.143936 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0522 17:53:09.143944 1 shared_informer.go:320] Caches are synced for configmaps
I0522 17:53:09.143872 1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
I0522 17:53:09.143956 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0522 17:53:09.143940 1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
I0522 17:53:09.144006 1 handler_discovery.go:447] Starting ResourceDiscoveryManager
I0522 17:53:09.147656 1 controller.go:615] quota admission added evaluator for: namespaces
E0522 17:53:09.149131 1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
I0522 17:53:09.352351 1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
I0522 17:53:10.000114 1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
I0522 17:53:10.003662 1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
I0522 17:53:10.003677 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0522 17:53:10.401970 1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0522 17:53:10.431664 1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0522 17:53:10.564789 1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
W0522 17:53:10.571710 1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
I0522 17:53:10.572657 1 controller.go:615] quota admission added evaluator for: endpoints
I0522 17:53:10.577337 1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0522 17:53:11.057429 1 controller.go:615] quota admission added evaluator for: serviceaccounts
I0522 17:53:11.962630 1 controller.go:615] quota admission added evaluator for: deployments.apps
I0522 17:53:11.972651 1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
I0522 17:53:12.162218 1 controller.go:615] quota admission added evaluator for: daemonsets.apps
I0522 17:53:25.167826 1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
I0522 17:53:25.351601 1 controller.go:615] quota admission added evaluator for: replicasets.apps
==> kube-controller-manager [dce56fa365a9] <==
I0522 17:53:24.564624 1 shared_informer.go:320] Caches are synced for endpoint
I0522 17:53:24.564652 1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
I0522 17:53:24.564727 1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-828033"
I0522 17:53:24.564763 1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
I0522 17:53:24.564846 1 shared_informer.go:320] Caches are synced for job
I0522 17:53:24.564925 1 shared_informer.go:320] Caches are synced for attach detach
I0522 17:53:24.565192 1 shared_informer.go:320] Caches are synced for deployment
I0522 17:53:24.574805 1 shared_informer.go:320] Caches are synced for resource quota
I0522 17:53:24.615689 1 shared_informer.go:320] Caches are synced for persistent volume
I0522 17:53:24.615718 1 shared_informer.go:320] Caches are synced for PV protection
I0522 17:53:24.619960 1 shared_informer.go:320] Caches are synced for resource quota
I0522 17:53:25.032081 1 shared_informer.go:320] Caches are synced for garbage collector
I0522 17:53:25.143504 1 shared_informer.go:320] Caches are synced for garbage collector
I0522 17:53:25.143551 1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
I0522 17:53:25.467902 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="113.282115ms"
I0522 17:53:25.473436 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="5.477354ms"
I0522 17:53:25.473538 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="62.839µs"
I0522 17:53:25.480355 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="67.122µs"
I0522 17:53:27.539450 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="87.754µs"
I0522 17:53:27.563897 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="109.015µs"
I0522 17:53:40.888251 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="113.836µs"
I0522 17:53:40.903676 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="6.828828ms"
I0522 17:53:40.903798 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="72.954µs"
I0522 17:53:40.911852 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="5.610804ms"
I0522 17:53:40.911935 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="45.925µs"
==> kube-proxy [faac4370a332] <==
I0522 17:53:25.952564 1 server_linux.go:69] "Using iptables proxy"
I0522 17:53:25.969504 1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
I0522 17:53:25.993839 1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I0522 17:53:25.993892 1 server_linux.go:165] "Using iptables Proxier"
I0522 17:53:25.996582 1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
I0522 17:53:25.996608 1 server_linux.go:528] "Defaulting to no-op detect-local"
I0522 17:53:25.996633 1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
I0522 17:53:25.996844 1 server.go:872] "Version info" version="v1.30.1"
I0522 17:53:25.996866 1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0522 17:53:26.043989 1 config.go:192] "Starting service config controller"
I0522 17:53:26.044016 1 shared_informer.go:313] Waiting for caches to sync for service config
I0522 17:53:26.044053 1 config.go:101] "Starting endpoint slice config controller"
I0522 17:53:26.044059 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0522 17:53:26.044235 1 config.go:319] "Starting node config controller"
I0522 17:53:26.044257 1 shared_informer.go:313] Waiting for caches to sync for node config
I0522 17:53:26.144579 1 shared_informer.go:320] Caches are synced for endpoint slice config
I0522 17:53:26.144617 1 shared_informer.go:320] Caches are synced for service config
I0522 17:53:26.144751 1 shared_informer.go:320] Caches are synced for node config
==> kube-scheduler [f457f32fdd43] <==
W0522 17:53:09.146946 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0522 17:53:09.148467 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
W0522 17:53:09.146991 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0522 17:53:09.148499 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
W0522 17:53:09.146997 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0522 17:53:09.148518 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
W0522 17:53:09.148341 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0522 17:53:09.148545 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
W0522 17:53:09.148892 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0522 17:53:09.148932 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W0522 17:53:10.084722 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0522 17:53:10.084765 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
W0522 17:53:10.112919 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0522 17:53:10.112952 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
W0522 17:53:10.175208 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0522 17:53:10.175254 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
W0522 17:53:10.194289 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0522 17:53:10.194329 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
W0522 17:53:10.200134 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0522 17:53:10.200161 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W0522 17:53:10.203122 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0522 17:53:10.203156 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
W0522 17:53:10.344173 1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0522 17:53:10.344204 1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
I0522 17:53:12.774271 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
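The scheduler's "forbidden" storm is the usual pre-RBAC-bootstrap window: its informers start listing before kubeadm's RBAC rules exist, and the errors end once caches sync at 17:53:12. A hedged sketch of the equivalent authorization check via a SelfSubjectAccessReview; the kubeconfig path is a placeholder, and this queries whatever identity that kubeconfig carries, not system:kube-scheduler:

package main

import (
	"context"
	"fmt"

	authorizationv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Ask the API server whether the current identity may list nodes, the same
	// permission the scheduler is denied above until RBAC is bootstrapped.
	sar := &authorizationv1.SelfSubjectAccessReview{
		Spec: authorizationv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authorizationv1.ResourceAttributes{
				Verb:     "list",
				Resource: "nodes",
			},
		},
	}
	resp, err := cs.AuthorizationV1().SelfSubjectAccessReviews().Create(context.TODO(), sar, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("allowed=%v reason=%q\n", resp.Status.Allowed, resp.Status.Reason)
}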
==> kubelet <==
May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.345758 2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7e82964e-040d-419c-969e-e89b79f50b09-lib-modules\") pod \"kube-proxy-fl69s\" (UID: \"7e82964e-040d-419c-969e-e89b79f50b09\") " pod="kube-system/kube-proxy-fl69s"
May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.345873 2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7e82964e-040d-419c-969e-e89b79f50b09-kube-proxy\") pod \"kube-proxy-fl69s\" (UID: \"7e82964e-040d-419c-969e-e89b79f50b09\") " pod="kube-system/kube-proxy-fl69s"
May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.345899 2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7e82964e-040d-419c-969e-e89b79f50b09-xtables-lock\") pod \"kube-proxy-fl69s\" (UID: \"7e82964e-040d-419c-969e-e89b79f50b09\") " pod="kube-system/kube-proxy-fl69s"
May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.345921 2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp7lb\" (UniqueName: \"kubernetes.io/projected/7e82964e-040d-419c-969e-e89b79f50b09-kube-api-access-pp7lb\") pod \"kube-proxy-fl69s\" (UID: \"7e82964e-040d-419c-969e-e89b79f50b09\") " pod="kube-system/kube-proxy-fl69s"
May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.459345 2487 topology_manager.go:215] "Topology Admit Handler" podUID="cbb5dcd0-bdba-4215-8eff-6c1ceae3d5c8" podNamespace="kube-system" podName="coredns-7db6d8ff4d-dxfhb"
May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.464987 2487 topology_manager.go:215] "Topology Admit Handler" podUID="8c59e7e8-7d36-4396-a2db-f715d854e654" podNamespace="kube-system" podName="coredns-7db6d8ff4d-gznzs"
May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.547332 2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cbb5dcd0-bdba-4215-8eff-6c1ceae3d5c8-config-volume\") pod \"coredns-7db6d8ff4d-dxfhb\" (UID: \"cbb5dcd0-bdba-4215-8eff-6c1ceae3d5c8\") " pod="kube-system/coredns-7db6d8ff4d-dxfhb"
May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.547376 2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc4sj\" (UniqueName: \"kubernetes.io/projected/cbb5dcd0-bdba-4215-8eff-6c1ceae3d5c8-kube-api-access-lc4sj\") pod \"coredns-7db6d8ff4d-dxfhb\" (UID: \"cbb5dcd0-bdba-4215-8eff-6c1ceae3d5c8\") " pod="kube-system/coredns-7db6d8ff4d-dxfhb"
May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.547404 2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c59e7e8-7d36-4396-a2db-f715d854e654-config-volume\") pod \"coredns-7db6d8ff4d-gznzs\" (UID: \"8c59e7e8-7d36-4396-a2db-f715d854e654\") " pod="kube-system/coredns-7db6d8ff4d-gznzs"
May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.547419 2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9l2b\" (UniqueName: \"kubernetes.io/projected/8c59e7e8-7d36-4396-a2db-f715d854e654-kube-api-access-k9l2b\") pod \"coredns-7db6d8ff4d-gznzs\" (UID: \"8c59e7e8-7d36-4396-a2db-f715d854e654\") " pod="kube-system/coredns-7db6d8ff4d-gznzs"
May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.694888 2487 topology_manager.go:215] "Topology Admit Handler" podUID="f6689fce-b948-4db5-abc7-7fb25d6d4e1c" podNamespace="kube-system" podName="storage-provisioner"
May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.749414 2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnzd7\" (UniqueName: \"kubernetes.io/projected/f6689fce-b948-4db5-abc7-7fb25d6d4e1c-kube-api-access-tnzd7\") pod \"storage-provisioner\" (UID: \"f6689fce-b948-4db5-abc7-7fb25d6d4e1c\") " pod="kube-system/storage-provisioner"
May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.749504 2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f6689fce-b948-4db5-abc7-7fb25d6d4e1c-tmp\") pod \"storage-provisioner\" (UID: \"f6689fce-b948-4db5-abc7-7fb25d6d4e1c\") " pod="kube-system/storage-provisioner"
May 22 17:53:26 ha-828033 kubelet[2487]: I0522 17:53:26.269960 2487 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="715e7f0294d0abed48fa06f52203ec68a5d32aee5fba94994c174dd344eb382d"
May 22 17:53:26 ha-828033 kubelet[2487]: I0522 17:53:26.289737 2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fl69s" podStartSLOduration=1.289718387 podStartE2EDuration="1.289718387s" podCreationTimestamp="2024-05-22 17:53:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-22 17:53:26.287458677 +0000 UTC m=+14.397120092" watchObservedRunningTime="2024-05-22 17:53:26.289718387 +0000 UTC m=+14.399379799"
May 22 17:53:27 ha-828033 kubelet[2487]: I0522 17:53:27.523164 2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=2.5231425830000003 podStartE2EDuration="2.523142583s" podCreationTimestamp="2024-05-22 17:53:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-22 17:53:27.522832045 +0000 UTC m=+15.632493457" watchObservedRunningTime="2024-05-22 17:53:27.523142583 +0000 UTC m=+15.632803998"
May 22 17:53:27 ha-828033 kubelet[2487]: I0522 17:53:27.572618 2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-dxfhb" podStartSLOduration=2.572598329 podStartE2EDuration="2.572598329s" podCreationTimestamp="2024-05-22 17:53:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-22 17:53:27.572250374 +0000 UTC m=+15.681911787" watchObservedRunningTime="2024-05-22 17:53:27.572598329 +0000 UTC m=+15.682259742"
May 22 17:53:27 ha-828033 kubelet[2487]: I0522 17:53:27.572757 2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-gznzs" podStartSLOduration=2.572749753 podStartE2EDuration="2.572749753s" podCreationTimestamp="2024-05-22 17:53:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-22 17:53:27.538817538 +0000 UTC m=+15.648478952" watchObservedRunningTime="2024-05-22 17:53:27.572749753 +0000 UTC m=+15.682411167"
May 22 17:53:31 ha-828033 kubelet[2487]: I0522 17:53:31.646168 2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-swzdx" podStartSLOduration=1.752288075 podStartE2EDuration="6.646145265s" podCreationTimestamp="2024-05-22 17:53:25 +0000 UTC" firstStartedPulling="2024-05-22 17:53:25.87920307 +0000 UTC m=+13.988864465" lastFinishedPulling="2024-05-22 17:53:30.773060244 +0000 UTC m=+18.882721655" observedRunningTime="2024-05-22 17:53:31.645395144 +0000 UTC m=+19.755056551" watchObservedRunningTime="2024-05-22 17:53:31.646145265 +0000 UTC m=+19.755806677"
May 22 17:53:32 ha-828033 kubelet[2487]: I0522 17:53:32.197898 2487 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
May 22 17:53:32 ha-828033 kubelet[2487]: I0522 17:53:32.199915 2487 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
May 22 17:53:39 ha-828033 kubelet[2487]: I0522 17:53:39.699762 2487 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b3fd8cf48c9595a338c9efce60e4362f1d6c5dbb5b1b6fc5065ecf200c879ff"
May 22 17:53:39 ha-828033 kubelet[2487]: I0522 17:53:39.699796 2487 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="921c71ab51b2950a38ec6a5d23870de457a14e415a7ad00f912cc87e12bfa805"
May 22 17:53:39 ha-828033 kubelet[2487]: I0522 17:53:39.859168 2487 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91e8c76c71ae7c1fe51a6927a57419a0bf7e4407f4dfae175272d8249720d690"
May 22 17:53:39 ha-828033 kubelet[2487]: I0522 17:53:39.859188 2487 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cddac885b8c2a5bed4d06b9a6e03613c3fcbf0b9a85b1b7454ca9ec3efb09323"
==> storage-provisioner [4aff7c101c8d] <==
I0522 17:53:26.489878 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0522 17:53:26.504063 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0522 17:53:26.504102 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0522 17:53:26.512252 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0522 17:53:26.512472 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ha-828033_8153ed51-b201-463a-8328-a1e0474c3dbc!
I0522 17:53:26.512684 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"895ffb52-c06f-40dc-b6c2-d8db833fa097", APIVersion:"v1", ResourceVersion:"400", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ha-828033_8153ed51-b201-463a-8328-a1e0474c3dbc became leader
I0522 17:53:26.612677 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ha-828033_8153ed51-b201-463a-8328-a1e0474c3dbc!
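The provisioner wins the kube-system/k8s.io-minikube-hostpath lease through an Endpoints-based lock (per the event above). A rough sketch of the same pattern with client-go's leader election, using the current Lease lock rather than the Endpoints lock the log shows; that substitution, the kubeconfig path, and all timing parameters are assumptions:

package main

import (
	"context"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	id, _ := os.Hostname()
	// Lease named after the lock in the log; the real provisioner here used an
	// Endpoints object instead.
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
		Client:     cs.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}
	leaderelection.RunOrDie(context.TODO(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) { /* start serving as leader */ },
			OnStoppedLeading: func() { /* relinquish */ },
		},
	})
}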
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-828033 -n ha-828033
helpers_test.go:261: (dbg) Run: kubectl --context ha-828033 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StartCluster (218.21s)